- 23 May, 2023 6 commits
-
-
John Fastabend authored
A common mechanism to put a TCP socket into the sockmap is to hook the BPF_SOCK_OPS_{ACTIVE_PASSIVE}_ESTABLISHED_CB event with a BPF program that can map the socket info to the correct BPF verdict parser. When the user adds the socket to the map, the psock is created and the new ops are assigned to ensure the verdict program will 'see' the sk_buffs as they arrive. Part of this process hooks the sk_data_ready op with a BPF-specific handler to wake up the BPF verdict program when data is ready to read. The logic is simple enough (posted here for easy reading):

  static void sk_psock_verdict_data_ready(struct sock *sk)
  {
          struct socket *sock = sk->sk_socket;

          if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
                  return;
          sock->ops->read_skb(sk, sk_psock_verdict_recv);
  }

The oversight here is that sk->sk_socket is not assigned until the application accept()s the new socket. However, it is entirely OK for the peer application to do a connect() followed immediately by sends. The socket on the receiver sits on the backlog queue of the listening socket until it is accepted and the data is queued up. If the peer never accepts the socket, or is slow to do so, it will eventually hit data limits and the session will be rate limited. But, importantly for BPF sockmap hooks, when this data is received the TCP stack does the sk_data_ready() call, yet read_skb() is never called for this data because sk_socket is missing. The data sits on the sk_receive_queue. Then, once the socket is accepted, if we never receive more data from the peer there will be no further sk_data_ready calls and all the data remains on the sk_receive_queue. The user then calls recvmsg after accept(), and for TCP sockets in sockmap we use the tcp_bpf_recvmsg_parser() handler. The handler checks for data in the sk_msg ingress queue, expecting that the BPF program has already run from the sk_data_ready hook and enqueued the data as needed. So we are stuck. To fix this, do an unlikely check in the recvmsg handler for data on the sk_receive_queue and, if it exists, wake up data_ready. We have the sock locked in both read_skb and recvmsg, so we should avoid having multiple runners. Fixes: 04919bed ("tcp: Introduce tcp_read_skb()") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20230523025618.113937-7-john.fastabend@gmail.com
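A minimal sketch of the shape of the fix, assuming the recvmsg handler described above (the exact call used to re-run the verdict path is an assumption, not the literal patch):

  /* Sketch only: in tcp_bpf_recvmsg_parser(), before waiting on the sk_msg
   * ingress queue, check whether data was queued on sk_receive_queue while
   * sk->sk_socket was still unset (pre-accept). If so, re-run the
   * data_ready path so the verdict program finally sees it.
   */
  if (unlikely(!skb_queue_empty_lockless(&sk->sk_receive_queue)))
          sk->sk_data_ready(sk);  /* wakes sk_psock_verdict_data_ready() */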
-
John Fastabend authored
The sockmap code is returning EAGAIN after a FIN packet is received and no more data is on the receive queue. The correct behavior is to return 0 to the user so the user can then close the socket. The EAGAIN causes many apps to retry, which masks the problem. Eventually the socket is evicted from the sockmap because it is released from the sockmap sock free handling. The issue creates a delay and can cause some errors on the application side. To fix this, check on the sk_msg_recvmsg side: if the length is zero and the FIN flag is set, then return zero. A selftest will be added to check this condition. Fixes: 04919bed ("tcp: Introduce tcp_read_skb()") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: William Findlay <will@isovalent.com> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20230523025618.113937-6-john.fastabend@gmail.com
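A rough sketch of the intended check; msg_carries_fin() is a hypothetical helper used only for illustration (the real patch structures this check differently):

  /* Sketch: if nothing was copied and the queued message is a zero-length
   * one carrying FIN, report EOF (0) instead of -EAGAIN.
   */
  if (!copied && msg_carries_fin(psock))
          return 0;       /* EOF: peer sent FIN, no more data will come */
  if (!copied)
          return -EAGAIN; /* genuinely no data yet */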
-
John Fastabend authored
We noticed some rare sk_buffs were stepping past the queue when the system was under memory pressure. The general theory is to skip enqueueing sk_buffs when it is not necessary, which is the normal case with a system that is properly provisioned for the task: no memory pressure and enough CPU assigned. But, if we can't allocate memory due to an ENOMEM error when enqueueing the sk_buff into the sockmap receive queue, we push it onto a delayed workqueue to retry later. When a new sk_buff is received we then check if that queue is empty. However, there is a problem with simply checking the queue length. When an sk_buff is being processed from the ingress queue but is not yet on the sockmap msg receive queue, it is possible to also receive an sk_buff through the normal path. It will check the ingress queue, see a length of zero, and then skip ahead of the packet being processed. Previously we used the sock lock from both contexts, which made the problem harder to hit, but not impossible. To fix this, instead of popping the skb from the queue entirely, we peek the skb from the queue and do the copy there. This ensures checks of the queue length are non-zero while the skb is being processed. Then, finally, when the entire skb has been copied to the user space queue or another socket, we pop it off the queue. This way the queue length check allows bypassing the queue only after the list has been completely processed. To reproduce the issue we run the NGINX compliance test with sockmap running and observe some flakes in our testing that we attributed to this issue. Fixes: 04919bed ("tcp: Introduce tcp_read_skb()") Suggested-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: William Findlay <will@isovalent.com> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20230523025618.113937-5-john.fastabend@gmail.com
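A minimal sketch of the peek-then-dequeue pattern described above, assuming the psock ingress list name and eliding the copy/redirect details:

  /* Sketch: keep the skb on ingress_skb while it is being copied so that
   * concurrent "is the queue empty?" checks still see it queued.
   */
  skb = skb_peek(&psock->ingress_skb);    /* do not remove it yet */
  if (!skb)
          return;
  /* ... copy or redirect the skb contents here ... */
  skb_unlink(skb, &psock->ingress_skb);   /* only now drop it from the list */
  kfree_skb(skb);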
-
John Fastabend authored
Now that the backlog manages the reschedule() logic correctly, we can drop the partial fix that rescheduled from the recvmsg hook. Rescheduling on the recvmsg hook was added to address a corner case where we still had data in the backlog state but had nothing to kick it and reschedule the backlog worker to run and finish copying data out of the state. This had a couple of limitations: first, it required user space to kick it, introducing an unnecessary EBUSY and retry. Second, it only handled the ingress case, and egress redirects would still be hung. With the correct fix, pushing the reschedule logic down to where the ENOMEM error occurs, we can drop this fix. Fixes: bec21719 ("skmsg: Schedule psock work if the cached skb exists on the psock") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20230523025618.113937-4-john.fastabend@gmail.com
-
John Fastabend authored
Sk_buffs are fed into sockmap verdict programs either from a strparser (when the user might want to decide how framing of the skb is done by attaching another parser program) or directly through tcp_read_sock. The tcp_read_sock is the preferred method for performance when the BPF logic is a stream parser. The flow for Cilium's common use case with a stream parser is:

  tcp_read_sock()
    sk_psock_verdict_recv
      ret = bpf_prog_run_pin_on_cpu()
      sk_psock_verdict_apply(sock, skb, ret)
        // if system is under memory pressure or app is slow we may
        // need to queue skb. Do this queuing through ingress_skb and
        // then kick timer to wake up handler
        skb_queue_tail(ingress_skb, skb)
        schedule_work(work);

The work queue is wired up to sk_psock_backlog(). This will then walk the ingress_skb skb list that holds our sk_buffs that could not be handled, but should be OK to run at some later point. However, it is possible that the workqueue doing this work still hits an error when sending the skb. When this happens the skbuff is requeued on a temporary 'state' struct kept with the workqueue. This is necessary because it is possible to partially send an skbuff before hitting an error, and we need to know how and where to restart when the workqueue runs next. Now for the trouble: we don't re-kick the workqueue. This can cause a stall where the skbuff we just cached on the state variable might never be sent. This happens when it is the last packet in a flow and no further packets come along that would cause the system to kick the workqueue from that side. To fix this we could do a simple schedule_work(), but while under memory pressure it makes sense to back off some instead of continuing to retry repeatedly. So instead, convert schedule_work to schedule_delayed_work and add backoff logic to reschedule from the backlog queue on errors. It is not obvious what a good backoff is, so use '1'. While testing we observed some flakes running the NGINX compliance test with sockmap; we attributed these failed tests to this bug and the subsequent issue. From the on-list discussion: commit bec21719 ("skmsg: Schedule psock work if the cached skb exists on the psock") was intended to address a similar race, but had a couple of cases it missed. Most obviously, it only accounted for receiving traffic on the local socket, so if redirecting into another socket we could still get an sk_buff stuck here. Next, it missed the case where copied=0 in the recv() handler, and then we wouldn't kick the scheduler. Also, it is sub-optimal to require userspace to kick the internal mechanisms of sockmap to wake it up and copy data to user. It results in an extra syscall and requires the app to actually handle the EAGAIN correctly. Fixes: 04919bed ("tcp: Introduce tcp_read_skb()") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: William Findlay <will@isovalent.com> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20230523025618.113937-3-john.fastabend@gmail.com
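A sketch of the backoff conversion described above, assuming a delayed_work member on the psock (the field name dwork and the surrounding error handling are illustrative, not the exact patch):

  /* Sketch: on a send error from the backlog worker, requeue the work with
   * a small delay instead of leaving the cached skb stranded in 'state'.
   */
  if (ret <= 0) {
          /* back off by 1 jiffy; "1" chosen per the commit message */
          schedule_delayed_work(&psock->dwork, 1);
          return;
  }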
-
John Fastabend authored
The read_skb hook calls consume_skb() now, but this means that if the recv_actor program wants to use the skb it needs to inc the ref cnt so that the consume_skb() doesn't kfree the sk_buff. This is problematic because in some error cases under memory pressure we may need to linearize the sk_buff from sk_psock_skb_ingress_enqueue(). Then we get this:

  skb_linearize()
    __pskb_pull_tail()
      pskb_expand_head()
        BUG_ON(skb_shared(skb))

Because we incremented the users refcnt from sk_psock_verdict_recv(), we hit the BUG_ON with refcnt > 1 and trip it. To fix this, let's simply pass ownership of the sk_buff through the read_skb call. Then we can drop the consume from the read_skb handlers and assume the verdict recv does any required kfree. Bug found while testing in our CI, which runs in VMs that hit memory constraints rather regularly. William tested TCP read_skb handlers.

  [  106.536188] ------------[ cut here ]------------
  [  106.536197] kernel BUG at net/core/skbuff.c:1693!
  [  106.536479] invalid opcode: 0000 [#1] PREEMPT SMP PTI
  [  106.536726] CPU: 3 PID: 1495 Comm: curl Not tainted 5.19.0-rc5 #1
  [  106.537023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ArchLinux 1.16.0-1 04/01/2014
  [  106.537467] RIP: 0010:pskb_expand_head+0x269/0x330
  [  106.538585] RSP: 0018:ffffc90000138b68 EFLAGS: 00010202
  [  106.538839] RAX: 000000000000003f RBX: ffff8881048940e8 RCX: 0000000000000a20
  [  106.539186] RDX: 0000000000000002 RSI: 0000000000000000 RDI: ffff8881048940e8
  [  106.539529] RBP: ffffc90000138be8 R08: 00000000e161fd1a R09: 0000000000000000
  [  106.539877] R10: 0000000000000018 R11: 0000000000000000 R12: ffff8881048940e8
  [  106.540222] R13: 0000000000000003 R14: 0000000000000000 R15: ffff8881048940e8
  [  106.540568] FS:  00007f277dde9f00(0000) GS:ffff88813bd80000(0000) knlGS:0000000000000000
  [  106.540954] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  [  106.541227] CR2: 00007f277eeede64 CR3: 000000000ad3e000 CR4: 00000000000006e0
  [  106.541569] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  [  106.541915] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  [  106.542255] Call Trace:
  [  106.542383] <IRQ>
  [  106.542487] __pskb_pull_tail+0x4b/0x3e0
  [  106.542681] skb_ensure_writable+0x85/0xa0
  [  106.542882] sk_skb_pull_data+0x18/0x20
  [  106.543084] bpf_prog_b517a65a242018b0_bpf_skskb_http_verdict+0x3a9/0x4aa9
  [  106.543536] ? migrate_disable+0x66/0x80
  [  106.543871] sk_psock_verdict_recv+0xe2/0x310
  [  106.544258] ? sk_psock_write_space+0x1f0/0x1f0
  [  106.544561] tcp_read_skb+0x7b/0x120
  [  106.544740] tcp_data_queue+0x904/0xee0
  [  106.544931] tcp_rcv_established+0x212/0x7c0
  [  106.545142] tcp_v4_do_rcv+0x174/0x2a0
  [  106.545326] tcp_v4_rcv+0xe70/0xf60
  [  106.545500] ip_protocol_deliver_rcu+0x48/0x290
  [  106.545744] ip_local_deliver_finish+0xa7/0x150

Fixes: 04919bed ("tcp: Introduce tcp_read_skb()") Reported-by: William Findlay <will@isovalent.com> Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: William Findlay <will@isovalent.com> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/bpf/20230523025618.113937-2-john.fastabend@gmail.com
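A simplified sketch of the ownership-transfer idea inside a read_skb handler, contrasting the two models (abbreviated; not the exact patch):

  /* Before (sketch): read_skb consumed the skb, so the recv_actor had to
   * take a reference, making the skb shared and tripping
   * BUG_ON(skb_shared(skb)) if it later needed pskb_expand_head().
   *
   * After (sketch): read_skb hands the skb over; the recv_actor (the
   * verdict path) is now responsible for freeing it when done.
   */
  ret = recv_actor(sk, skb);      /* actor owns skb from here on */
  /* no consume_skb(skb) in the read_skb handler anymore */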
-
- 22 May, 2023 1 commit
-
-
Anton Protopopov authored
The LRU and LRU_PERCPU maps allocate a new element on update before locking the target hash table bucket. Right after that the maps try to lock the bucket. If this fails, then the maps return -EBUSY to the caller without releasing the allocated element. This makes the element untracked: it doesn't belong to either of the free lists, and it doesn't belong to the hash table, so it can't be re-used; this eventually leads to a permanent -ENOMEM on LRU map updates, which is unexpected. Fix this by returning the element to the local free list if bucket locking fails. Fixes: 20b6cc34 ("bpf: Avoid hashtab deadlock with map_locked") Signed-off-by: Anton Protopopov <aspsk@isovalent.com> Link: https://lore.kernel.org/r/20230522154558.2166815-1-aspsk@isovalent.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
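A sketch of the shape of the fix in the LRU update path, assuming the internal helper names from kernel/bpf/hashtab.c (treat them as assumptions here):

  /* Sketch: the element was pre-allocated before taking the bucket lock,
   * so a failed lock must give it back rather than leak it.
   */
  ret = htab_lock_bucket(htab, b, hash, &flags);
  if (ret) {
          htab_lru_push_free(htab, l_new);  /* was: leaked on -EBUSY */
          return ret;
  }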
-
- 19 May, 2023 1 commit
-
-
Will Deacon authored
A narrow load from a 64-bit context field results in a 64-bit load followed potentially by a 64-bit right-shift and then a bitwise AND operation to extract the relevant data. In the case of a 32-bit access, an immediate mask of 0xffffffff is used to construct a 64-bit BPF_AND operation which then sign-extends the mask value and effectively acts as a glorified no-op. For example:

  0: 61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0)

results in the following code generation for a 64-bit field:

  ldr x7, [x7] // 64-bit load
  mov x10, #0xffffffffffffffff
  and x7, x7, x10

Fix the mask generation so that narrow loads always perform a 32-bit AND operation:

  ldr x7, [x7] // 64-bit load
  mov w10, #0xffffffff
  and w7, w7, w10

Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: John Fastabend <john.fastabend@gmail.com> Cc: Krzesimir Nowak <krzesimir@kinvolk.io> Cc: Andrey Ignatov <rdna@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Fixes: 31fd8581 ("bpf: permits narrower load from bpf program context fields") Signed-off-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230518102528.1341-1-will@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
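A small self-contained C illustration of why the 64-bit AND with a sign-extended 0xffffffff immediate is a no-op while the 32-bit form actually truncates (a userspace demo, not kernel or JIT code):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t field = 0x1122334455667788ULL;     /* 64-bit ctx field */
          int32_t imm = (int32_t)0xffffffff;          /* 32-bit insn immediate */

          /* 64-bit ALU: the immediate sign-extends to all ones, keeping all 64 bits */
          uint64_t alu64 = field & (uint64_t)(int64_t)imm;
          /* 32-bit ALU: operates on the low 32 bits and clears the upper half */
          uint64_t alu32 = (uint32_t)field & (uint32_t)imm;

          printf("64-bit AND: %#llx\n", (unsigned long long)alu64); /* 0x1122334455667788 */
          printf("32-bit AND: %#llx\n", (unsigned long long)alu32); /* 0x55667788 */
          return 0;
  }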
-
- 16 May, 2023 1 commit
-
-
Andrii Nakryiko authored
__fallthrough is no longer supported. Instead of renaming it to the now-canonical ([0]) fallthrough pseudo-keyword, just get rid of it and equate the 'h' case to the default case, as both emit usage information and succeed. [0] https://www.kernel.org/doc/html/latest/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20230516001718.317177-1-andrii@kernel.org
-
- 15 May, 2023 1 commit
-
-
Jakub Kicinski authored
Some netdevices may get unregistered before late_initcall(), so we have to move the hashtable init earlier. Fixes: f1fc43d0 ("bpf: Move offload initialization into late_initcall") Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217399 Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/20230505215836.491485-1-kuba@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 14 May, 2023 1 commit
-
-
Jeremy Sowden authored
When building sign-file, the call to get the CFLAGS for libcrypto is missing white-space between `pkg-config` and `--cflags`:

  $(shell $(HOSTPKG_CONFIG)--cflags libcrypto 2> /dev/null)

Removing the redirection of stderr, we see:

  $ make -C tools/testing/selftests/bpf sign-file
  make: Entering directory '[...]/tools/testing/selftests/bpf'
  make: pkg-config--cflags: No such file or directory
    SIGN-FILE sign-file
  make: Leaving directory '[...]/tools/testing/selftests/bpf'

Add the missing space. Fixes: fc975906 ("selftests/bpf: Add test for bpf_verify_pkcs7_signature() kfunc") Signed-off-by: Jeremy Sowden <jeremy@azazel.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com> Link: https://lore.kernel.org/bpf/20230426215032.415792-1-jeremy@azazel.net Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 13 May, 2023 10 commits
-
-
David S. Miller authored
Hao Lan says: ==================== net: hns3: fix some bugs for hns3 There are some bugfixes for the HNS3 ethernet driver. Patch #1 fixes a missing check for RX packets, patch #2 fixes the VF promisc mode not being updated when the MAC table is full, and patch #3 fixes interrupts not being initialized during VF FLR. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jijie Shao authored
The timeout of the cmdq reset command has been increased to resolve the reset timeout issue in the full VF scenario. The timeout of other cmdq commands remains unchanged. Fixes: 8d307f8e ("net: hns3: create new set of unified hclge_comm_cmd_send APIs") Signed-off-by: Jijie Shao <shaojijie@huawei.com> Signed-off-by: Hao Lan <lanhao@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
Currently the hns3 VF function reset waits 5000 ms before the VF rebuild process. In product applications, this delay is too long for application configurations and causes configuration timeouts. According to tests, a 500 ms delay is enough for the reset process, except for PF FLR. So this patch changes the delay to 500 ms in these scenarios. Fixes: 6988eb2a ("net: hns3: Add support to reset the enet/ring mgmt layer") Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Hao Lan <lanhao@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jijie Shao authored
To prevent the system from abnormally sending PFC frames after an abnormal reset, the hns3 driver notifies the firmware to disable PFC before the reset. Fixes: 35d93a30 ("net: hns3: adjust the process of PF reset") Signed-off-by: Jijie Shao <shaojijie@huawei.com> Signed-off-by: Hao Lan <lanhao@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jie Wang authored
In the function hns3_dump_tx_queue_info, the print buffer is not large enough when the TX BD number is configured to 32760. As a result, information for several BDs wouldn't be displayed. Fix it by increasing the TX queue print buffer length. Fixes: 630a6738 ("net: hns3: adjust string spaces of some parameters of tx bd info in debugfs") Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Hao Lan <lanhao@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Alexis Lothoré says: ==================== net: dsa: rzn1-a5psw: fix STP states handling This small series fixes STP support and, while adding a new function to enable/disable learning, uses it to disable learning on standalone ports at switch setup, as reported by Vladimir Oltean. This series was initially submitted on net-next by Clement Leger, but a career change has led him to hand these topics over to me. Also, this new revision is submitted on net instead of net-next, based on Vladimir Oltean's suggestion on V1.

Changes since v2:
- fix commit split by moving A5PSW_MGMT_CFG_ENABLE to the relevant commit
- fix reverse christmas tree ordering in a5psw_port_stp_state_set

Changes since v1:
- fix typos in commit messages and doc
- re-split STP states handling commit
- add Fixes: tag and new Signed-off-by
- submit series as fix on net instead of net-next
- split learning and blocking setting functions
- remove unused define A5PSW_PORT_ENA_TX_SHIFT
- add boolean for tx/rx enabled for clarity
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Clément Léger authored
When ports are in standalone mode, they should have learning disabled to avoid adding new entries to the MAC lookup table, which might be used by other bridge ports to forward packets. While adding that, also make sure learning is enabled for the CPU port. Fixes: 888cdb89 ("net: dsa: rzn1-a5psw: add Renesas RZ/N1 advanced 5 port switch driver") Signed-off-by: Clément Léger <clement.leger@bootlin.com> Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com> Reviewed-by: Piotr Raczynski <piotr.raczynski@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
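A rough sketch of how a per-port learning toggle might look on this switch; the register name, bit layout, and the a5psw_reg_rmw() accessor are assumptions for illustration and may not match the driver exactly:

  /* Sketch: enable/disable MAC learning for one port.
   * A5PSW_INPUT_LEARN and a one-disable-bit-per-port layout are assumed.
   */
  static void a5psw_port_learning_set(struct a5psw *a5psw, int port, bool learn)
  {
          u32 learn_dis = BIT(port);      /* assumed per-port disable bit */

          a5psw_reg_rmw(a5psw, A5PSW_INPUT_LEARN, learn_dis,
                        learn ? 0 : learn_dis);
  }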
-
Alexis Lothoré authored
stp_set_state() should actually allow receiving BPDUs while in LEARNING mode, which is currently not the case. Additionally, the BLOCKEN bit does not actually forbid sending forwarded frames from that port. To fix this, add an a5psw_port_tx_enable() function which allows disabling TX. However, while its name suggests that TX is totally disabled, it is not: BPDUs can still be sent even when TX is disabled. This can be done by using forced forwarding with the switch tagging mechanism but keeping "filtering" disabled (which is already the case in the rzn1-a5sw tag driver). With these fixes, STP support is now functional. Fixes: 888cdb89 ("net: dsa: rzn1-a5psw: add Renesas RZ/N1 advanced 5 port switch driver") Signed-off-by: Clément Léger <clement.leger@bootlin.com> Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Clément Léger authored
Currently, management frames are discarded before reaching the CPU port due to a misconfiguration of the MGMT_CONFIG register. Enable them by setting the correct value in this register in order to correctly receive management frames and handle STP. Fixes: 888cdb89 ("net: dsa: rzn1-a5psw: add Renesas RZ/N1 advanced 5 port switch driver") Signed-off-by: Clément Léger <clement.leger@bootlin.com> Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com> Reviewed-by: Piotr Raczynski <piotr.raczynski@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xin Long authored
In commit 20704bd1 ("erspan: build the header with the right proto according to erspan_ver"), the proto is taken from t->parms.erspan_ver, but t->parms.erspan_ver is not used by the collect_md branch; instead, the proto should be taken from md->version for collect_md. Thanks to Kevin for pointing this out. Fixes: 20704bd1 ("erspan: build the header with the right proto according to erspan_ver") Fixes: 94d7d8f2 ("ip6_gre: add erspan v2 support") Reported-by: Kevin Traynor <ktraynor@redhat.com> Signed-off-by: Xin Long <lucien.xin@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
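A sketch of the distinction being described, assuming the usual erspan metadata layout (illustrative, not the exact hunk):

  /* Sketch: in the collect_md path the per-tunnel parms are not populated,
   * so pick the protocol from the metadata version instead.
   */
  if (t->collect_md)
          proto = (md->version == 1) ? htons(ETH_P_ERSPAN)
                                     : htons(ETH_P_ERSPAN2);
  else
          proto = (t->parms.erspan_ver == 1) ? htons(ETH_P_ERSPAN)
                                             : htons(ETH_P_ERSPAN2);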
-
- 12 May, 2023 15 commits
-
-
Eric Dumazet authored
When tcp_v4_send_reset() is called with @sk == NULL, we do not change ctl_sk->sk_priority, which could have been set from a prior invocation. Change tcp_v4_send_reset() to set sk_priority and sk_mark fields before calling ip_send_unicast_reply(). This means tcp_v4_send_reset() and tcp_v4_send_ack() no longer have to clear ctl_sk->sk_mark after their call to ip_send_unicast_reply(). Fixes: f6c0f5d2 ("tcp: honor SO_PRIORITY in TIME_WAIT state") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Antoine Tenart <atenart@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
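A simplified sketch of the idea; the real function also derives the priority from the timewait socket, so the exact expressions below are assumptions:

  /* Sketch: always (re)initialize the control socket's fields before
   * sending, so values from a prior caller cannot leak into this RST.
   */
  ctl_sk->sk_priority = sk ? READ_ONCE(sk->sk_priority) : 0;
  ctl_sk->sk_mark     = sk ? READ_ONCE(sk->sk_mark) : 0;
  /* ... then call ip_send_unicast_reply() as before; no need to clear
   * ctl_sk->sk_mark afterwards ... */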
-
Zhuang Shengen authored
When a client and server establish a connection through vsock, the client sends a request to the server to initiate the connection, then starts a timer to wait for the server's response. The server's RESPONSE message may arrive just as the timer expires. If the RESPONSE message is processed first, the connection is established; however, the client's timer has also fired, and the original processing logic of the client is to directly set the state of this vsock to CLOSE and return ETIMEDOUT. It will not notify the server when the port is released, causing the server-side port to remain. When the client's vsock_connect() times out, it should check whether the sk state is ESTABLISHED. If the sk state is ESTABLISHED, it means the connection is established, and the client should not set the sk state to CLOSE. Note: I encountered this issue on kernel-4.18, which can be fixed by this patch. Then I checked the latest code in the community and found a similar issue. Fixes: d021c344 ("VSOCK: Introduce VM Sockets") Signed-off-by: Zhuang Shengen <zhuangshengen@huawei.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
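A sketch of the added check inside the connect timeout path; the surrounding variable names and the out_wait label are assumptions about typical vsock_connect() structure, not the literal patch:

  /* Sketch: in the wait loop's timeout branch, do not tear down a
   * connection that the server's RESPONSE already completed.
   */
  if (timeout == 0) {
          if (sk->sk_state == TCP_ESTABLISHED)
                  break;                  /* response won the race; keep it */
          err = -ETIMEDOUT;
          sk->sk_state = TCP_CLOSE;       /* genuine timeout */
          goto out_wait;                  /* assumed cleanup label */
  }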
-
Pieter Jansen van Vuuren authored
By default we would not want RXFCS and RXALL features enabled as they are mainly intended for debugging purposes. This does not stop users from enabling them later on as needed. Fixes: 8e57daf7 ("sfc_ef100: RX path for EF100") Signed-off-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com> Co-developed-by: Edward Cree <ecree.xilinx@gmail.com> Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Martin Habets <habetsm.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jan Sokolowski authored
As not all ICE_TX_FLAGS_* fit in the current 16-bit tx_flags field that was introduced in the Fixes commit, VLAN-related information would be discarded completely. As such, creating a VLAN and trying to run ping through it would result in no traffic passing. Fix that by refactoring the tx_flags variable into flags only, plus a separate variable that holds the VLAN ID. As there is some space left, the type variable can fit between those two. Pahole reports no size change to the ice_tx_buf struct. Fixes: aa1d3faf ("ice: Robustify cleaning/completing XDP Tx buffers") Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com> Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
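A sketch of the layout change being described; field names and widths below are assumptions for illustration, not the struct as committed:

  struct ice_tx_buf {
          /* ... other members unchanged ... */
          u16 flags;      /* ICE_TX_FLAGS_* only; no longer shares bits with the VLAN tag */
          u8 type;        /* fits in the space between the two fields */
          u16 vid;        /* VLAN ID now lives in its own field */
  };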
-
Jakub Kicinski authored
It seems that we mostly get netdev CCed on wireless patches which are written by people who don't know any better and CC everything that get_maintainers spits out, rather than on patches which could indeed benefit from general networking review. Marking them down in patchwork as Awaiting Upstream is a bit tedious. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Johannes Berg <johannes@sipsolutions.net> Acked-by: Kalle Valo <kvalo@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huayu Chen authored
The patch corrects the NFP_NET_MAX_DSCP definition in the main.h file. The incorrect definition results in DSCP bits not being mapped properly when DCB is set. When NFP_NET_MAX_DSCP was defined as 4, the remaining 60 DSCP values failed to be set. Fixes: 9b7fe804 ("nfp: add DCB IEEE support") Cc: stable@vger.kernel.org Signed-off-by: Huayu Chen <huayu.chen@corigine.com> Acked-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Louis Peens <louis.peens@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
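For context, DSCP is a 6-bit field, so a full mapping needs 64 entries; the corrected definition presumably looks like the following (value inferred from the description, not quoted from the patch):

  /* DSCP is 6 bits wide: 2^6 = 64 code points, not 4 */
  #define NFP_NET_MAX_DSCP        64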
-
Jakub Kicinski authored
Documentation/netlink/ contains machine-readable protocol specs in YAML. Those are much like device tree bindings, so there is no point CCing docs@ for the changes. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Simon Horman <simon.horman@corigine.com>
-
Marcelo Ricardo Leitner authored
Neil moved away from SCTP-related duties. Move him to CREDITS then, and while at it, update the SCTP project website. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Acked-by: Xin Long <lucien.xin@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Grygorii Strashko authored
Introduce the W/A for packet errors seen with short cables (<1m) between two DP83867 PHYs. The W/A recommended by the DM requires FFE Equalizer Configuration tuning by writing value 0x0E81 to the DSP_FFE_CFG register (0x012C), surrounded by hard and soft resets as follows:

  write_reg(0x001F, 0x8000); // hard reset
  write_reg(DSP_FFE_CFG, 0x0E81);
  write_reg(0x001F, 0x4000); // soft reset

Since the DP83867 PHY DM says "Changing this register to 0x0E81, will not affect Long Cable performance.", enable the W/A by default. Fixes: 2a10154a ("net: phy: dp83867: Add TI dp83867 phy") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
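A sketch of how this sequence might look using standard phylib accessors; DP83867_DSP_FFE_CFG is an assumed macro name for register 0x012C, and the reset bits follow the commit message rather than the exact patch:

  /* Sketch: FFE equalizer tuning W/A, bracketed by hard and soft reset.
   * DP83867_DSP_FFE_CFG (0x012C) is an assumed register name.
   */
  phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET);      /* 0x001F = 0x8000 */
  phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_DSP_FFE_CFG, 0x0e81);
  phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);    /* 0x001F = 0x4000 */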
-
Uwe Kleine-König authored
In the (unlikely) event that pm_runtime_get() (disguised as pm_runtime_resume_and_get()) fails, the remove callback returned an error early. The problem with this is that the driver core ignores the error value and continues removing the device. This results in a resource leak. Worse, the devm-allocated resources are freed, so if a callback of the driver is called later the register mapping is already gone, which probably results in a crash. Fixes: a31eda65 ("net: fec: fix clock count mis-match") Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20230510200020.1534610-1-u.kleine-koenig@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
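A sketch of the usual way to handle this in a remove callback: warn and keep tearing down instead of returning early (simplified, not the exact patch):

  /* Sketch: a .remove() callback must not bail out early, because the
   * driver core removes the device regardless of the return value.
   */
  ret = pm_runtime_resume_and_get(&pdev->dev);
  if (ret < 0)
          dev_err(&pdev->dev, "runtime resume failed (%d), removing anyway\n", ret);
  /* ... continue with the normal teardown path ... */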
-
Eric Dumazet authored
After the blamed commit, nexthop_fib6_nh_bh() and nexthop_fib6_nh() are the same. Delete nexthop_fib6_nh_bh(), and convert /proc/net/ipv6_route to standard rcu to avoid this splat:

  [ 5723.180080] WARNING: suspicious RCU usage
  [ 5723.180083] -----------------------------
  [ 5723.180084] include/net/nexthop.h:516 suspicious rcu_dereference_check() usage!
  [ 5723.180086] other info that might help us debug this:
  [ 5723.180087] rcu_scheduler_active = 2, debug_locks = 1
  [ 5723.180089] 2 locks held by cat/55856:
  [ 5723.180091] #0: ffff9440a582afa8 (&p->lock){+.+.}-{3:3}, at: seq_read_iter (fs/seq_file.c:188)
  [ 5723.180100] #1: ffffffffaac07040 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire (include/linux/rcupdate.h:326)
  [ 5723.180109] stack backtrace:
  [ 5723.180111] CPU: 14 PID: 55856 Comm: cat Tainted: G S I 6.3.0-dbx-DEV #528
  [ 5723.180115] Call Trace:
  [ 5723.180117] <TASK>
  [ 5723.180119] dump_stack_lvl (lib/dump_stack.c:107)
  [ 5723.180124] dump_stack (lib/dump_stack.c:114)
  [ 5723.180126] lockdep_rcu_suspicious (include/linux/context_tracking.h:122)
  [ 5723.180132] ipv6_route_seq_show (include/net/nexthop.h:?)
  [ 5723.180135] ? ipv6_route_seq_next (net/ipv6/ip6_fib.c:2605)
  [ 5723.180140] seq_read_iter (fs/seq_file.c:272)
  [ 5723.180145] seq_read (fs/seq_file.c:163)
  [ 5723.180151] proc_reg_read (fs/proc/inode.c:316 fs/proc/inode.c:328)
  [ 5723.180155] vfs_read (fs/read_write.c:468)
  [ 5723.180160] ? up_read (kernel/locking/rwsem.c:1617)
  [ 5723.180164] ksys_read (fs/read_write.c:613)
  [ 5723.180168] __x64_sys_read (fs/read_write.c:621)
  [ 5723.180170] do_syscall_64 (arch/x86/entry/common.c:?)
  [ 5723.180174] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:120)
  [ 5723.180177] RIP: 0033:0x7fa455677d2a

Fixes: 09eed119 ("neighbour: switch to standard rcu, instead of rcu_bh") Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://lore.kernel.org/r/20230510154646.370659-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
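A minimal sketch of the conversion in the /proc/net/ipv6_route seq_file path (illustrative only):

  /* Sketch: plain RCU is sufficient here; the _bh flavor (and the removed
   * nexthop_fib6_nh_bh() helper) buys nothing extra.
   */
  rcu_read_lock();                        /* was rcu_read_lock_bh() */
  fib6_nh = nexthop_fib6_nh(nh);          /* _bh variant deleted */
  /* ... emit the route entry ... */
  rcu_read_unlock();                      /* was rcu_read_unlock_bh() */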
-
Jiri Pirko authored
The commit 565b4824 ("devlink: change port event netdev notifier from per-net to global") changed the original per-net notifier to be per-devlink instance. That fixed the issue of not receiving events of netdev uninit if it moved to a different namespace. That worked fine in the -net tree. However, later on, when commit ee75f1fc ("net/mlx5e: Create separate devlink instance for ethernet auxiliary device") and commit 72ed5d56 ("net/mlx5: Suspend auxiliary devices only in case of PCI device suspend") were merged, a deadlock was introduced when removing a namespace with a devlink instance with another nested instance. Here is the bad flow example resulting in a deadlock with mlx5:

  net_cleanup_work -> cleanup_net (takes down_read(&pernet_ops_rwsem)
    -> devlink_pernet_pre_exit()
    -> devlink_reload()
    -> mlx5_devlink_reload_down()
    -> mlx5_unload_one_devl_locked()
    -> mlx5_detach_device()
    -> del_adev()
    -> mlx5e_remove()
    -> mlx5e_destroy_devlink()
    -> devlink_free()
    -> unregister_netdevice_notifier() (takes down_write(&pernet_ops_rwsem)

Steps to reproduce:

  $ modprobe mlx5_core
  $ ip netns add ns1
  $ devlink dev reload pci/0000:08:00.0 netns ns1
  $ ip netns del ns1

Resolve this by converting the notifier from a per-devlink-instance one to a static one registered during the init phase and left registered forever. Use this notifier for all devlink port instances created later on. Note that a tree needs this fix only if all of the cited Fixes commits are present. Reported-by: Moshe Shemesh <moshe@nvidia.com> Fixes: 565b4824 ("devlink: change port event netdev notifier from per-net to global") Fixes: ee75f1fc ("net/mlx5e: Create separate devlink instance for ethernet auxiliary device") Fixes: 72ed5d56 ("net/mlx5: Suspend auxiliary devices only in case of PCI device suspend") Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230510144621.932017-1-jiri@resnulli.us Signed-off-by: Jakub Kicinski <kuba@kernel.org>
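A sketch of the shape of the change: a single static notifier registered once at init and never unregistered (the callback name is an assumption; the init hook shown is illustrative):

  /* Sketch: one global notifier instead of one per devlink instance, so
   * devlink_free() no longer has to take pernet_ops_rwsem via
   * unregister_netdevice_notifier().
   */
  static struct notifier_block devlink_port_netdevice_nb = {
          .notifier_call = devlink_port_netdevice_event,  /* assumed name */
  };

  static int __init devlink_init(void)
  {
          /* registered once, globally, and left registered forever */
          return register_netdevice_notifier(&devlink_port_netdevice_nb);
  }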
-
Jakub Kicinski authored
Andrea Mayer says: ==================== selftests: seg6: make srv6_end_dt4_l3vpn_test more robust This patchset aims to improve and make more robust the selftests performed to check whether the SRv6 End.DT4 behavior works as expected under different system configurations. Some Linux distributions enable the Duplicate Address Detection and Reverse Path Filtering mechanisms by default, which can interfere with the SRv6 End.DT4 behavior and cause selftests to fail. The following patches improve the selftests for End.DT4 by taking these two mechanisms into account. Specifically:
- patch 1/2: selftests: seg6: disable DAD on IPv6 router cfg for srv6_end_dt4_l3vpn_test
- patch 2/2: selftests: seg6: disable rp_filter by default in srv6_end_dt4_l3vpn_test
==================== Link: https://lore.kernel.org/r/20230510111638.12408-1-andrea.mayer@uniroma2.it Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrea Mayer authored
On some distributions, the rp_filter is automatically set (=1) by default on a netdev basis (also on VRFs). In an SRv6 End.DT4 behavior, decapsulated IPv4 packets are routed using the table associated with the VRF bound to that tunnel. During lookup operations, the rp_filter can lead to packet loss when activated on the VRF. Therefore, we chose to make this selftest more robust by explicitly disabling the rp_filter during tests (as it is automatically set by some Linux distributions). Fixes: 2195444e ("selftests: add selftest for the SRv6 End.DT4 behavior") Reported-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it> Tested-by: Hangbin Liu <liuhangbin@gmail.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrea Mayer authored
The srv6_end_dt4_l3vpn_test instantiates a virtual network consisting of several routers (rt-1, rt-2) and hosts. When the IPv6 addresses of the rt-{1,2} routers are configured, Duplicate Address Detection (DAD) kicks in when enabled in the Linux distros running the selftests. DAD is used to check whether an IPv6 address is already assigned in a network. The mechanism consists of sending an ICMPv6 Neighbor Solicitation and waiting for a reply. As the DAD process could take too long to complete, it may cause some of the tests carried out by the srv6_end_dt4_l3vpn_test script to fail. To make srv6_end_dt4_l3vpn_test more robust, we disable DAD on the routers, since we configure the virtual network manually and do not need any address deduplication mechanism at all. Fixes: 2195444e ("selftests: add selftest for the SRv6 End.DT4 behavior") Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 11 May, 2023 4 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds authored
Pull networking fixes from Paolo Abeni: "Including fixes from netfilter.

Current release - regressions:
- mtk_eth_soc: fix NULL pointer dereference

Previous releases - regressions:
- core:
  - skb_partial_csum_set() fix against transport header magic value
  - fix load-tearing on sk->sk_stamp in sock_recv_cmsgs().
  - annotate sk->sk_err write from do_recvmmsg()
  - add vlan_get_protocol_and_depth() helper
- netlink: annotate accesses to nlk->cb_running
- netfilter: always release netdev hooks from notifier

Previous releases - always broken:
- core: deal with most data-races in sk_wait_event()
- netfilter: fix possible bug_on with enable_hooks=1
- eth: bonding: fix send_peer_notif overflow
- eth: xpcs: fix incorrect number of interfaces
- eth: ipvlan: fix out-of-bounds caused by unclear skb->cb
- eth: stmmac: Initialize MAC_ONEUS_TIC_COUNTER register"

* tag 'net-6.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (31 commits)
  af_unix: Fix data races around sk->sk_shutdown.
  af_unix: Fix a data race of sk->sk_receive_queue->qlen.
  net: datagram: fix data-races in datagram_poll()
  net: mscc: ocelot: fix stat counter register values
  ipvlan:Fix out-of-bounds caused by unclear skb->cb
  docs: networking: fix x25-iface.rst heading & index order
  gve: Remove the code of clearing PBA bit
  tcp: add annotations around sk->sk_shutdown accesses
  net: add vlan_get_protocol_and_depth() helper
  net: pcs: xpcs: fix incorrect number of interfaces
  net: deal with most data-races in sk_wait_event()
  net: annotate sk->sk_err write from do_recvmmsg()
  netlink: annotate accesses to nlk->cb_running
  kselftest: bonding: add num_grat_arp test
  selftests: forwarding: lib: add netns support for tc rule handle stats get
  Documentation: bonding: fix the doc of peer_notif_delay
  bonding: fix send_peer_notif overflow
  net: ethernet: mtk_eth_soc: fix NULL pointer dereference
  selftests: nft_flowtable.sh: check ingress/egress chain too
  selftests: nft_flowtable.sh: monitor result file sizes
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Linus Torvalds authored
Pull media fixes from Mauro Carvalho Chehab:
- fix some unused-variable warning in mtk-mdp3
- ignore unused suspend operations in nxp
- some driver fixes in rcar-vin

* tag 'media/v6.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
  media: platform: mtk-mdp3: work around unused-variable warning
  media: nxp: ignore unused suspend operations
  media: rcar-vin: Select correct interrupt mode for V4L2_FIELD_ALTERNATE
  media: rcar-vin: Fix NV12 size alignment
  media: rcar-vin: Gen3 can not scale NV12
-
git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf
Jakub Kicinski authored
Pablo Neira Ayuso says: ==================== Netfilter updates for net The following patchset contains Netfilter fixes for net:
1) Fix UAF when releasing netnamespace, from Florian Westphal.
2) Fix possible BUG_ON when nf_conntrack is enabled with enable_hooks, from Florian Westphal.
3) Fixes for nft_flowtable.sh selftest, from Boris Sukholitko.
4) Extend nft_flowtable.sh selftest to cover integration with ingress/egress hooks, from Florian Westphal.

* tag 'nf-23-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf:
  selftests: nft_flowtable.sh: check ingress/egress chain too
  selftests: nft_flowtable.sh: monitor result file sizes
  selftests: nft_flowtable.sh: wait for specific nc pids
  selftests: nft_flowtable.sh: no need for ps -x option
  selftests: nft_flowtable.sh: use /proc for pid checking
  netfilter: conntrack: fix possible bug_on with enable_hooks=1
  netfilter: nf_tables: always release netdev hooks from notifier
==================== Link: https://lore.kernel.org/r/20230510083313.152961-1-pablo@netfilter.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Kuniyuki Iwashima says: ==================== af_unix: Fix two data races reported by KCSAN. KCSAN reported data races around these two fields for AF_UNIX sockets:
  * sk->sk_receive_queue->qlen
  * sk->sk_shutdown
Let's annotate them properly. ==================== Link: https://lore.kernel.org/r/20230510003456.42357-1-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-