- 25 Jun, 2020 30 commits
-
-
Toke Høiland-Jørgensen authored
I spotted a few nits when comparing the in-tree version of sch_cake with the out-of-tree one: a redundant error variable declaration shadowing an outer declaration, and an indentation alignment issue. Fix both of these. Fixes: 046f6fd5 ("sched: Add Common Applications Kept Enhanced (cake) qdisc") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Toke Høiland-Jørgensen authored
As a further optimisation of the diffserv parsing codepath, we can skip it entirely if CAKE is configured to neither use diffserv-based classification, nor to zero out the diffserv bits. Fixes: c87b4ecd ("sch_cake: Make sure we can write the IP header before changing DSCP bits") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ilya Ponetayev authored
cake_handle_diffserv() tries to linearize the mac and network header parts of the skb and to make it writable unconditionally. In some cases this leads to a full skb reallocation, which reduces throughput and increases CPU load. Some measurements of IPv4 forwarding + NAPT on a MIPS router with a 580 MHz single-core CPU were conducted. It appears that on kernel 4.9, skb_try_make_writable() reallocates the skb if it was allocated in the ethernet driver via the so-called 'build skb' method from the page cache (this was first noticed as a strange increase of the kmalloc-2048 slab). Obtain the DSCP value via a read-only skb_header_pointer() call, and keep the linearization only for DSCP bleaching or ECN CE setting. And, as an additional optimisation, skip diffserv parsing entirely if it is not needed by the current configuration. Fixes: c87b4ecd ("sch_cake: Make sure we can write the IP header before changing DSCP bits") Signed-off-by: Ilya Ponetayev <i.ponetaev@ndmsystems.com> [ fix a few style issues, reflow commit message ] Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
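A minimal sketch of the read-only lookup described above, assuming the generic skb_header_pointer() and dsfield helpers; the function name and structure are illustrative, not the in-tree sch_cake code:

#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/if_ether.h>
#include <net/dsfield.h>

static u8 sketch_read_dscp(struct sk_buff *skb)
{
	struct ipv6hdr _ip6h, *ip6h;
	struct iphdr _iph, *iph;

	switch (skb->protocol) {
	case htons(ETH_P_IP):
		/* Read-only access: no reallocation, no copy of cloned data */
		iph = skb_header_pointer(skb, skb_network_offset(skb),
					 sizeof(_iph), &_iph);
		return iph ? ipv4_get_dsfield(iph) >> 2 : 0;
	case htons(ETH_P_IPV6):
		ip6h = skb_header_pointer(skb, skb_network_offset(skb),
					  sizeof(_ip6h), &_ip6h);
		return ip6h ? ipv6_get_dsfield(ip6h) >> 2 : 0;
	default:
		return 0;
	}
}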
-
Michal Kubecek authored
When getting SQI or maximum SQI value fails in linkstate_prepare_data(), we must not return without calling ethnl_ops_complete(dev) as that could result in imbalance between ethtool_ops ->begin() and ->complete() calls. Fixes: 80660219 ("ethtool: provide UAPI for PHY Signal Quality Index (SQI)") Signed-off-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jason A. Donenfeld says: ==================== napi_gro_receive caller return value cleanups In 6570bc79 ("net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()"), the GRO_NORMAL case stopped calling netif_receive_skb_internal, checking its return value, and returning GRO_DROP in case it failed. Instead, it calls into netif_receive_skb_list_internal (after a bit of indirection), which doesn't return any error. Therefore, napi_gro_receive will never return GRO_DROP, making handling GRO_DROP dead code. I emailed the author of 6570bc79 on netdev [1] to see if this change was intentional, but the dlink.ru email address has been disconnected, and looking a bit further myself, it seems somewhat infeasible to start propagating return values backwards from the internal machinations of netif_receive_skb_list_internal. Taking a look at all the callers of napi_gro_receive, it appears that three are checking the return value for the purpose of comparing it to the now never-happening GRO_DROP, and one just casts it to (void), a likely historical leftover. Every other of the 120 callers does not bother checking the return value. And it seems like these remaining 116 callers are doing the right thing: after calling napi_gro_receive, the packet is now in the hands of the upper layers of the networking stack, and the device driver itself has no business now making decisions based on what the upper layers choose to do. Incrementing stats counters on GRO_DROP seems like a mistake, made by these three drivers, but not by the remaining 117. It would seem, therefore, that after rectifying these four callers of napi_gro_receive, that I should go ahead and just remove returning the value from napi_gro_receive altogether. However, napi_gro_receive has a function event tracer, and being able to introspect into the networking stack to see how often napi_gro_receive is returning whatever interesting GRO status (aside from _DROP) remains an interesting data point worth keeping for debugging. So, this series simply gets rid of the return value checking for the four useless places where that check never evaluates to anything meaningful. [1] https://lore.kernel.org/netdev/20200624210606.GA1362687@zx2c4.com/ ==================== Acked-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
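As a caller-side illustration of the cleanup above, a hedged sketch (driver and function names are made up) of what the 116 unchanged call sites already look like: hand the skb to GRO and ignore the return value entirely:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void sketch_rx_complete(struct napi_struct *napi, struct sk_buff *skb)
{
	/* Previously a few drivers did:
	 *   if (napi_gro_receive(napi, skb) == GRO_DROP)
	 *           stats->rx_dropped++;
	 * but GRO_DROP is never returned any more, so that branch is dead.
	 * Once handed off, the packet belongs to the upper layers.
	 */
	napi_gro_receive(napi, skb);
}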
-
Jason A. Donenfeld authored
The napi_gro_receive function no longer returns GRO_DROP ever, making handling GRO_DROP dead code. This commit removes that dead code. Further, it's not even clear that device drivers have any business in taking action after passing off received packets; that's arguably out of their hands. In this case, too, the non-gro path didn't bother checking the return value. Plus, this had some clunky debugging functions that duplicated code from elsewhere and was generally pretty messy. So, this commit cleans that all up too. Fixes: 6570bc79 ("net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason A. Donenfeld authored
Basically no drivers care about the return value here, and there's no __must_check that would make casting to void sensible, so remove it. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason A. Donenfeld authored
The napi_gro_receive function no longer returns GRO_DROP ever, making handling GRO_DROP dead code. This commit removes that dead code. Further, it's not even clear that device drivers have any business in taking action after passing off received packets; that's arguably out of their hands. Fixes: 6570bc79 ("net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason A. Donenfeld authored
The napi_gro_receive function no longer returns GRO_DROP ever, making handling GRO_DROP dead code. This commit removes that dead code. Further, it's not even clear that device drivers have any business in taking action after passing off received packets; that's arguably out of their hands. Fixes: e7096c13 ("net: WireGuard secure network tunnel") Fixes: 6570bc79 ("net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Roopa Prabhu authored
This patch fixes last saved fdb index in fdb dump handler when handling fdb's with nhid. Fixes: 1274e1cc ("vxlan: ecmp support for mac fdb entries") Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
If a socket is set ipv6only, it will still send IPv4 addresses in the INIT and INIT_ACK packets. This potentially misleads the peer into using them, which then would cause association termination. The fix is to not add IPv4 addresses to ipv6only sockets. Fixes: 1da177e4 ("Linux-2.6.12-rc2") Reported-by: Corey Minyard <cminyard@mvista.com> Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Corey Minyard <cminyard@mvista.com> Signed-off-by: David S. Miller <davem@davemloft.net>
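A hedged, logic-only sketch of the rule above (struct and helper names are illustrative, not the actual sctp address-list code): when building the address list advertised in INIT/INIT_ACK, AF_INET entries are skipped for v6-only sockets:

#include <net/sock.h>
#include <linux/socket.h>

static bool sketch_addr_allowed(const struct sock *sk, sa_family_t family)
{
	/* An ipv6only socket must not advertise IPv4 addresses the peer
	 * could then try (and fail) to use.
	 */
	if (family == AF_INET && sk->sk_ipv6only)
		return false;
	return true;
}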
-
Briana Oursler authored
Update odd length cookie hexstrings in csum.json, tunnel_key.json and bpf.json to be even length to comply with check enforced in commit 0149dabf2a1b ("tc: m_actions: check cookie hexstring len") in iproute2. Signed-off-by: Briana Oursler <briana.oursler@gmail.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Reviewed-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Neal Cardwell says: ==================== tcp_cubic: fix spurious HYSTART_DELAY on RTT decrease This series fixes a long-standing bug in the TCP CUBIC HYSTART_DELAY mechanism recently reported by Mirja Kuehlewind. The code can cause a spurious exit of slow start in some particular cases: upon an RTT decrease that happens on the 9th or later ACK in a round trip. This series fixes the original Hystart code and also the recent BPF implementation. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Neal Cardwell authored
Apply the fix from: "tcp_cubic: fix spurious HYSTART_DELAY exit upon drop in min RTT" to the BPF implementation of TCP CUBIC congestion control. Repeating the commit description here for completeness: Mirja Kuehlewind reported a bug in Linux TCP CUBIC Hystart, where the Hystart HYSTART_DELAY mechanism can exit Slow Start spuriously on an ACK when the minimum RTT of a connection goes down. From inspection of the existing code it is clear that this could happen in an example like the following:
o The first 8 RTT samples in a round trip are 150ms, resulting in a curr_rtt of 150ms and a delay_min of 150ms.
o The 9th RTT sample is 100ms. The curr_rtt does not change after the first 8 samples, so curr_rtt remains 150ms. But delay_min can be lowered at any time, so delay_min falls to 100ms. The code executes the HYSTART_DELAY comparison between curr_rtt of 150ms and delay_min of 100ms, and the curr_rtt is declared far enough above delay_min to force a (spurious) exit of Slow Start.
The fix here is simple: allow every RTT sample in a round trip to lower the curr_rtt. Fixes: 6de4a9c4 ("bpf: tcp: Add bpf_cubic example") Reported-by: Mirja Kuehlewind <mirja.kuehlewind@ericsson.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Neal Cardwell authored
Mirja Kuehlewind reported a bug in Linux TCP CUBIC Hystart, where the Hystart HYSTART_DELAY mechanism can exit Slow Start spuriously on an ACK when the minimum RTT of a connection goes down. From inspection of the existing code it is clear that this could happen in an example like the following:
o The first 8 RTT samples in a round trip are 150ms, resulting in a curr_rtt of 150ms and a delay_min of 150ms.
o The 9th RTT sample is 100ms. The curr_rtt does not change after the first 8 samples, so curr_rtt remains 150ms. But delay_min can be lowered at any time, so delay_min falls to 100ms. The code executes the HYSTART_DELAY comparison between curr_rtt of 150ms and delay_min of 100ms, and the curr_rtt is declared far enough above delay_min to force a (spurious) exit of Slow Start.
The fix here is simple: allow every RTT sample in a round trip to lower the curr_rtt. Fixes: ae27e98a ("[TCP] CUBIC v2.3") Reported-by: Mirja Kuehlewind <mirja.kuehlewind@ericsson.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
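A hedged, self-contained simulation of the scenario above (state and thresholds are simplified relative to tcp_cubic.c; names are illustrative): with every sample allowed to lower curr_rtt, the RTT drop on the 9th ACK no longer looks like a delay increase:

#include <stdbool.h>
#include <stdio.h>

#define MIN_SAMPLES 8

struct hystart { unsigned curr_rtt, delay_min, sample_cnt; bool exit_ss; };

static void on_ack(struct hystart *h, unsigned rtt_ms)
{
	if (rtt_ms < h->delay_min)
		h->delay_min = rtt_ms;           /* delay_min may drop at any time */
	if (rtt_ms < h->curr_rtt)
		h->curr_rtt = rtt_ms;            /* fix: every sample may lower curr_rtt */
	if (h->sample_cnt < MIN_SAMPLES)
		h->sample_cnt++;
	else if (h->curr_rtt > h->delay_min + h->delay_min / 8)
		h->exit_ss = true;               /* only a real RTT increase triggers exit */
}

int main(void)
{
	struct hystart h = { .curr_rtt = ~0u, .delay_min = ~0u };

	for (int i = 0; i < 8; i++)
		on_ack(&h, 150);                 /* first 8 samples: 150 ms */
	on_ack(&h, 100);                         /* 9th sample: min RTT drops */
	printf("exit slow start: %s\n", h.exit_ss ? "yes (spurious)" : "no");
	return 0;
}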
-
David S. Miller authored
Vladimir Oltean says: ==================== Fixes for SJA1105 DSA tc-gate action This small series fixes 2 bugs in the tc-gate implementation:
1. The TAS state machine keeps getting rescheduled even after removing tc-gate actions on all ports.
2. tc-gate actions with only one gate control list entry are installed to hardware with an incorrect interval of zero, which makes the switch erroneously drop those packets (since the configuration is invalid).
To keep the code palatable, a forward-declaration was avoided by moving some code around in patch 1/4. I hope that isn't too much of an issue. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
The sja1105_gating_cfg_time_to_interval function does this, as per the comments:
/* The gate entries contain absolute times in their e->interval field. Convert
 * that to proper intervals (i.e. "0, 5, 10, 15" to "5, 5, 5, 5"). */
To perform that task, it iterates over gating_cfg->entries, at each step updating the interval of the _previous_ entry. So one interval remains to be updated at the end of the loop: the last one (since it isn't "prev" for anyone else). But there was an erroneous check that the last element's interval should not be updated if it is also the only element. I'm not quite sure why that check was there, but it is clearly incorrect, as a tc-gate schedule with a single element would get an e->interval of zero, regardless of the duration requested by the user. The switch wouldn't even consider this configuration as valid: it will just drop all traffic that matches the rule. Fixes: 834f8933 ("net: dsa: sja1105: implement tc-gate using time-triggered virtual links") Reported-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
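A hedged, self-contained illustration of that conversion (plain arrays instead of the driver's gate-entry list; the helper name is made up), showing why the last, or only, entry must also be converted:

#include <stdio.h>

static void to_intervals(unsigned int *t, int n, unsigned int cycle_time)
{
	/* t[] holds absolute open times; rewrite each slot as the gap to the
	 * next entry, with the last entry closed by the cycle time. */
	for (int i = 0; i < n; i++) {
		unsigned int next = (i + 1 < n) ? t[i + 1] : cycle_time;
		t[i] = next - t[i];
	}
}

int main(void)
{
	unsigned int gates[] = { 0, 5, 10, 15 };
	unsigned int one[] = { 0 };

	to_intervals(gates, 4, 20);
	printf("%u %u %u %u\n", gates[0], gates[1], gates[2], gates[3]); /* 5 5 5 5 */

	to_intervals(one, 1, 5);
	printf("%u\n", one[0]); /* 5: a single entry must not end up with interval 0 */
	return 0;
}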
-
Vladimir Oltean authored
Currently, tas_data->enabled would remain true even after deleting all tc-gate rules from the switch ports, which would cause the sja1105_tas_state_machine to get unnecessarily scheduled. Also, if there were any errors which would prevent the hardware from enabling the gating schedule, the sja1105_tas_state_machine would continuously detect and print that, spamming the kernel log, even if the rules were subsequently deleted. The rules themselves are _not_ active, because sja1105_init_scheduling does enough of a job to not install the gating schedule in the static config. But the virtual link rules themselves are still present. So call the functions that remove the tc-gate configuration from priv->tas_data.gating_cfg, so that tas_data->enabled can be set to false and sja1105_tas_state_machine stops being scheduled. Fixes: 834f8933 ("net: dsa: sja1105: implement tc-gate using time-triggered virtual links") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
Currently sja1105_compose_gating_subschedule is not prepared to be called for the case where we want to recompute the global tc-gate configuration after we've deleted those actions on a port. After deleting the tc-gate actions on the last port, max_cycle_time would become zero, and that would incorrectly prevent sja1105_free_gating_config from getting called. So move the freeing function above the check for the need to apply a new configuration. Fixes: 834f8933 ("net: dsa: sja1105: implement tc-gate using time-triggered virtual links") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
It turns out that sja1105_compose_gating_subschedule must also be called from sja1105_vl_delete, to recalculate the overall tc-gate configuration. Currently this is not possible without introducing a forward declaration. So move the function to the top of the file, along with its dependencies. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Beznea authored
DMA buffers were not freed on the failure path of at91ether_open(). Along with the changes for freeing the DMA buffers, the enable/disable interrupt instructions were moved to the at91ether_start()/at91ether_stop() functions, and the operations in at91ether_stop() are now done in the reverse order compared with at91ether_start(). Before this patch the operation order on the interface open path was:
1/ alloc DMA buffers
2/ enable tx, rx
3/ enable interrupts
and the order on the interface close path was:
1/ disable tx, rx
2/ disable interrupts
3/ free DMA buffers.
Fixes: 7897b071 ("net: macb: convert to phylink") Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Beznea authored
Call pm_runtime_put_sync() on failure path of at91ether_open. Fixes: e6a41c23 ("net: macb: ensure interface is not suspended on at91rm9200") Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored (merge from git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf)
Pablo Neira Ayuso says: ==================== Netfilter fixes for net The following patchset contains Netfilter fixes for net, they are:
1) Unaligned atomic access in ipset, from Russell King.
2) Missing module description, from Rob Gill.
3) Patches to fix a module unload causing a NULL pointer dereference in xtables, from David Wilder. For the record, I am posting here his cover letter explaining the problem: A crash happened on ppc64le when running ltp network tests, triggered by "rmmod iptable_mangle". See previous discussion in this thread: https://lists.openwall.net/netdev/2020/06/03/161 . In the crash I found in iptable_mangle_hook() that state->net->ipv4.iptable_mangle=NULL, causing a NULL pointer dereference. net->ipv4.iptable_mangle is set to NULL in iptable_mangle_net_exit(), which is called when the iptable_mangle module is unloaded. An rmmod task was found running in the crash dump. A 2nd crash showed the same problem when running "rmmod iptable_filter" (net->ipv4.iptable_filter=NULL). To fix this I added a .pre_exit hook in all iptable_foo.c. The pre_exit will un-register the underlying hook and exit will do the table freeing. The netns core does an unconditional synchronize_rcu() after the pre_exit hooks, ensuring no packets are in flight that have picked up the pointer before completing the un-register. These patches include changes for both iptables and ip6tables. We tested this fix with ltp running iptables01.sh and iptables01.sh -6 in a loop for 72 hours.
4) Add a selftest for conntrack helper assignment, from Florian Westphal.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Martitz authored
The eth_addr member is passed to ether_addr functions that require 2-byte alignment, therefore the member must be properly aligned to avoid unaligned accesses. The problem has been present since the initial merge of multicast-to-unicast support in commit 6db6f0ea ("bridge: multicast to unicast"). Fixes: 6db6f0ea ("bridge: multicast to unicast") Cc: Roopa Prabhu <roopa@cumulusnetworks.com> Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Cc: David S. Miller <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Felix Fietkau <nbd@nbd.name> Signed-off-by: Thomas Martitz <t.martitz@avm.de> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
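A hedged sketch of the kind of fix involved (the struct name is made up; the bridge code has its own layout): force 2-byte alignment on the MAC address member so the 16-bit loads inside ether_addr_equal()/ether_addr_copy() stay aligned:

#include <linux/etherdevice.h>

struct sketch_port_group {
	unsigned char		flags;
	/* __aligned(2) keeps the address safe for the u16-based helpers even
	 * if the preceding members would leave it at an odd offset. */
	unsigned char		eth_addr[ETH_ALEN] __aligned(2);
};

static bool sketch_same_dst(const struct sketch_port_group *pg, const u8 *addr)
{
	return ether_addr_equal(pg->eth_addr, addr);
}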
-
Denis Kirjanov authored
There is a problem with the CWR flag set in an incoming ACK segment: it leads to a situation where the ECE flag is latched forever. The following packetdrill script shows what happens:
// Stack receives incoming segments with CE set
+0.1 <[ect0] . 11001:12001(1000) ack 1001 win 65535
+0.0 <[ce] . 12001:13001(1000) ack 1001 win 65535
+0.0 <[ect0] P. 13001:14001(1000) ack 1001 win 65535
// Stack responds with ECN ECHO
+0.0 >[noecn] . 1001:1001(0) ack 12001
+0.0 >[noecn] E. 1001:1001(0) ack 13001
+0.0 >[noecn] E. 1001:1001(0) ack 14001
// Write a packet
+0.1 write(3, ..., 1000) = 1000
+0.0 >[ect0] PE. 1001:2001(1000) ack 14001
// Pure ACK received
+0.01 <[noecn] W. 14001:14001(0) ack 2001 win 65535
// Since CWR was sent, this packet should NOT have ECE set
+0.1 write(3, ..., 1000) = 1000
+0.0 >[ect0] P. 2001:3001(1000) ack 14001
// but Linux will still keep ECE latched here, with packetdrill
// flagging a missing ECE flag, expecting
// >[ect0] PE. 2001:3001(1000) ack 14001
// in the script
In the situation above we will continue to send ECN ECHO packets and trigger the peer to reduce the congestion window. To avoid that we can check CWR on pure ACKs received.
v3: - Add a sequence check to avoid sending an ACK to an ACK
v2: - Adjusted the comment - move CWR check before checking for unacknowledged packets
Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ard Biesheuvel authored
The skcipher API dynamically instantiates, at request time, a transformation object that implements the requested algorithm optimally on the given platform. This notion of optimality only matters for cases like bulk network or disk encryption, where performance can be a bottleneck, or in cases where the algorithm itself is not known at compile time. In the mscc case, we are dealing with AES encryption of a single block, so neither concern applies, and we are better off using the AES library interface, which is lightweight and safe for this kind of use. Note that the scatterlist API does not permit references to buffers that are located on the stack, so the existing code is incorrect in any case, but avoiding the skcipher and scatterlist APIs entirely is the most straightforward approach to fixing this. Cc: Antoine Tenart <antoine.tenart@bootlin.com> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Heiner Kallweit <hkallweit1@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Fixes: 28c5107a ("net: phy: mscc: macsec support") Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
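A hedged sketch of the library-based, single-block pattern described above (the function name is illustrative, not the mscc driver code), using the crypto/aes.h helpers and no scatterlists:

#include <crypto/aes.h>
#include <linux/string.h>

static int sketch_aes_encrypt_block(const u8 *key, unsigned int key_len,
				    const u8 in[AES_BLOCK_SIZE],
				    u8 out[AES_BLOCK_SIZE])
{
	struct crypto_aes_ctx ctx;
	int err;

	err = aes_expandkey(&ctx, key, key_len);
	if (err)
		return err;

	/* One block, plain buffers (stack is fine): no skcipher request,
	 * no scatterlist. */
	aes_encrypt(&ctx, out, in);

	memzero_explicit(&ctx, sizeof(ctx));	/* scrub the key schedule */
	return 0;
}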
-
David S. Miller authored
Doug Berger says: ==================== net: bcmgenet: use hardware padding of runt frames Now that scatter-gather and tx-checksumming are enabled by default, a packet corruption issue was revealed that can occur for very short fragmented packets. When padding these frames to the minimum length it is possible for the non-linear (fragment) data to be added to the end of the linear header in an SKB. Since the number of fragments is read before the padding and used afterward without reloading, the fragment that should have been consumed can be tacked on in place of part of the padding. The third commit in this set corrects this by removing the software padding and allowing the hardware to add the pad bytes if necessary. The first two commits resolve warnings observed by the kbuild test robot and are included here for simplicity of application. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Doug Berger authored
When commit 474ea9ca ("net: bcmgenet: correctly pad short packets") added the call to skb_padto() it should have been located before the nr_frags parameter was read, since that value could be changed when padding packets with lengths between 55 and 59 bytes (inclusive). The use of a stale nr_frags value can cause corruption of the pad data when tx-scatter-gather is enabled. This corruption of the pad can cause invalid checksum computation when hardware offload of tx-checksum is also enabled. Since the original reason for the padding was corrected by commit 7dd39913 ("net: bcmgenet: fix skb_len in bcmgenet_xmit_single()") we can remove the software padding altogether and make use of hardware padding of short frames, as long as the hardware also always appends the FCS value to the frame. Fixes: 474ea9ca ("net: bcmgenet: correctly pad short packets") Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
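A hedged sketch of the ordering hazard (the function name is made up; the actual fix removes software padding entirely and lets the MAC pad): any cached fragment count must only be read after skb_padto(), because padding may change the skb layout:

#include <linux/skbuff.h>
#include <linux/errno.h>

static int sketch_tx_prep(struct sk_buff *skb, unsigned int min_len)
{
	/* skb_padto() may extend the skb to min_len; on failure it has
	 * already freed the skb. */
	if (skb_padto(skb, min_len))
		return -ENOMEM;

	/* Safe only here: reading nr_frags before padding risks using a
	 * stale value and corrupting the pad bytes. */
	return skb_shinfo(skb)->nr_frags;
}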
-
Doug Berger authored
The 16-bit value that holds a short in network byte order should be declared as a restricted big endian type to allow type checks to succeed during assignment. Fixes: 3e370952 ("net: bcmgenet: add support for ethtool rxnfc flows") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
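A hedged sketch of the annotation style in question (struct and field names are illustrative): declaring the field as __be16 lets sparse flag assignments that skip the byte-order conversion:

#include <linux/types.h>
#include <asm/byteorder.h>

struct sketch_flow_rule {
	__be16 ether_type;	/* restricted type: network byte order only */
};

static void sketch_set_ether_type(struct sketch_flow_rule *rule, u16 host_val)
{
	/* Assigning host_val directly would trigger a sparse warning;
	 * the conversion has to be explicit. */
	rule->ether_type = cpu_to_be16(host_val);
}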
-
Doug Berger authored
This function was originally removed by Baoyou Xie in commit e2072600 ("net: bcmgenet: remove unused function in bcmgenet.c") to prevent a build warning. Some of the functions removed by Baoyou Xie are now used for WAKE_FILTER support so his commit was reverted, but this function is still unused and the kbuild test robot dutifully reported the warning. This commit once again removes the remaining unused hfb functions. Fixes: 14da1510 ("Revert "net: bcmgenet: remove unused function in bcmgenet.c"") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 Jun, 2020 10 commits
-
-
Florian Westphal authored
check that 'nft ... ct helper set <foo>' works:
1. configure ftp helper via nft and assign it to connections on port 2121
2. check with 'conntrack -L' that the next connection has the ftp helper attached to it.
Also add a test for auto-assign (old behaviour). Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
David Wilder authored
Using new helpers ip6t_unregister_table_pre_exit() and ip6t_unregister_table_exit(). Fixes: b9e69e12 ("netfilter: xtables: don't hook tables by default") Signed-off-by: David Wilder <dwilder@us.ibm.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
David Wilder authored
The pre_exit will un-register the underlying hook and .exit will do the table freeing. The netns core does an unconditional synchronize_rcu after the pre_exit hooks, ensuring no packets are in flight that have picked up the pointer before completing the un-register. Fixes: b9e69e12 ("netfilter: xtables: don't hook tables by default") Signed-off-by: David Wilder <dwilder@us.ibm.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
David Wilder authored
Using new helpers ipt_unregister_table_pre_exit() and ipt_unregister_table_exit(). Fixes: b9e69e12 ("netfilter: xtables: don't hook tables by default") Signed-off-by: David Wilder <dwilder@us.ibm.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
David Wilder authored
The pre_exit will un-register the underlying hook and .exit will do the table freeing. The netns core does an unconditional synchronize_rcu after the pre_exit hooks, ensuring no packets are in flight that have picked up the pointer before completing the un-register. Fixes: b9e69e12 ("netfilter: xtables: don't hook tables by default") Signed-off-by: David Wilder <dwilder@us.ibm.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
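A hedged sketch of the two-stage pernet teardown described above (names are illustrative, not the actual iptable_* files): .pre_exit drops the hooks, the netns core runs synchronize_rcu() after all pre_exit callbacks, and .exit then frees the table:

#include <linux/netfilter.h>
#include <net/net_namespace.h>

static void __net_exit sketch_table_pre_exit(struct net *net)
{
	/* Unregister the hooks so no new packet can pick up the
	 * soon-to-be-freed table pointer. */
}

static void __net_exit sketch_table_exit(struct net *net)
{
	/* By now the core has done synchronize_rcu(); any reader that saw
	 * the old pointer has finished, so freeing the table is safe. */
}

static struct pernet_operations sketch_table_net_ops = {
	.pre_exit = sketch_table_pre_exit,
	.exit     = sketch_table_exit,
};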
-
Rob Gill authored
The user tool modinfo is used to get information on kernel modules, including a description where it is available. This patch adds a brief MODULE_DESCRIPTION to netfilter kernel modules (descriptions taken from Kconfig file or code comments) Signed-off-by: Rob Gill <rrobgill@protonmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Russell King authored
When using ip_set with counters and comment, traffic causes the kernel to panic on 32-bit ARM: Alignment trap: not handling instruction e1b82f9f at [<bf01b0dc>] Unhandled fault: alignment exception (0x221) at 0xea08133c PC is at ip_set_match_extensions+0xe0/0x224 [ip_set] The problem occurs when we try to update the 64-bit counters - the faulting address above is not 64-bit aligned. The problem occurs due to the way elements are allocated, for example:
set->dsize = ip_set_elem_len(set, tb, 0, 0);
map = ip_set_alloc(sizeof(*map) + elements * set->dsize);
If the element has a requirement for a member to be 64-bit aligned, and set->dsize is not a multiple of 8 but is a multiple of four, then every odd-numbered element will be misaligned - and hitting an atomic64_add() on that element will cause the kernel to panic. ip_set_elem_len() must return a size that is rounded to the maximum alignment of any extension field stored in the element. This change ensures that is the case. Fixes: 95ad1f4a ("netfilter: ipset: Fix extension alignment") Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Jozsef Kadlecsik <kadlec@netfilter.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
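A hedged, simplified illustration of the sizing rule (the real ip_set_elem_len() walks the extension descriptors; this helper name is made up): rounding the per-element size up to the largest extension alignment keeps every element in the flat allocation aligned:

#include <linux/kernel.h>

static size_t sketch_elem_len(size_t base_len, size_t max_ext_align)
{
	/* e.g. with 64-bit counters max_ext_align is 8, so a 44-byte element
	 * becomes 48 and element_index * dsize stays 8-byte aligned. */
	return ALIGN(base_len, max_ext_align);
}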
-
Colin Ian King authored
The error DBG_STATUS_NO_MATCHING_FRAMING_MODE was added to enum dbg_status; however, there is a missing corresponding entry for it in the array s_status_str. This causes an out-of-bounds read when indexing into the last entry of s_status_str. Fix this by adding in the missing entry. Addresses-Coverity: ("Out-of-bounds read"). Fixes: 2d22bc83 ("qed: FW 8.42.2.0 debug features") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
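A hedged sketch of a build-time guard against this kind of enum/string-table drift (names are made up, not the qed definitions):

#include <linux/build_bug.h>
#include <linux/kernel.h>

enum sketch_dbg_status {
	SKETCH_DBG_STATUS_OK,
	SKETCH_DBG_STATUS_NO_MATCHING_FRAMING_MODE,
	SKETCH_MAX_DBG_STATUS		/* must stay last */
};

static const char * const sketch_status_str[] = {
	[SKETCH_DBG_STATUS_OK] = "Operation completed successfully",
	[SKETCH_DBG_STATUS_NO_MATCHING_FRAMING_MODE] = "No matching framing mode",
};

/* Fails the build if the table is shorter than the enum, e.g. a newly added
 * last status value without a string entry. */
static_assert(ARRAY_SIZE(sketch_status_str) == SKETCH_MAX_DBG_STATUS);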
-
David S. Miller authored
Jisheng Zhang says: ==================== net: phy: call phy_disable_interrupts() in phy_init_hw() We face an issue with the rtl8211f: a pin is shared between INTB and PMEB, and the PHY Register Accessible Interrupt is enabled by default, so the INTB/PMEB pin is always active in the polling-mode case. As Heiner pointed out: "I was thinking about calling phy_disable_interrupts() in phy_init_hw(), to have a defined init state as we don't know in which state the PHY is if the PHY driver is loaded. We shouldn't assume that it's the chip power-on defaults, BIOS or boot loader could have changed this. Or in case of dual-boot systems the other OS could leave the PHY in whatever state." Patch 1 makes phy_disable_interrupts() non-static so that it can be used in phy_init_hw() to have a defined init state. Patch 2 calls phy_disable_interrupts() in phy_init_hw() to have a defined init state.
Since v3: - call phy_disable_interrupts() so interrupts are disabled first, then config_init, thanks Florian
Since v2: - Don't export phy_disable_interrupts() but just make it non-static
Since v1: - EXPORT the correct symbol
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jisheng Zhang authored
Call phy_disable_interrupts() in phy_init_hw() to "have a defined init state as we don't know in which state the PHY is if the PHY driver is loaded. We shouldn't assume that it's the chip power-on defaults, BIOS or boot loader could have changed this. Or in case of dual-boot systems the other OS could leave the PHY in whatever state." as pointed out by Heiner. Suggested-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Signed-off-by: David S. Miller <davem@davemloft.net>
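A hedged, condensed sketch of the resulting init ordering (the real phy_init_hw() also handles soft reset and additional error paths): interrupts are disabled before the driver's config_init runs, giving a defined state regardless of what firmware or a bootloader left behind:

#include <linux/phy.h>

static int sketch_phy_init_hw(struct phy_device *phydev)
{
	int ret;

	/* Known state first: BIOS, boot loader or another OS may have left
	 * interrupt generation enabled. */
	ret = phy_disable_interrupts(phydev);
	if (ret)
		return ret;

	if (phydev->drv && phydev->drv->config_init)
		ret = phydev->drv->config_init(phydev);

	return ret;
}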
-