- 31 Mar, 2014 40 commits
-
Jakub Kicinski authored
ptp_tx_skb is always set before the work is scheduled, and the work is cancelled before ptp_tx_skb is set to NULL, so the PTP work can never see ptp_tx_skb set to NULL. Signed-off-by: Jakub Kicinski <kubakici@wp.pl> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
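The invariant can be sketched as follows (a minimal illustration with a hypothetical ptp_adapter struct standing in for the real ixgbe adapter, not the driver's actual code): the skb pointer is set before the work is scheduled, and the work is cancelled before the pointer is cleared, so the work handler can never observe NULL.

  #include <linux/skbuff.h>
  #include <linux/workqueue.h>

  /* Hypothetical stand-in for the driver's private adapter structure. */
  struct ptp_adapter {
      struct sk_buff *ptp_tx_skb;
      struct work_struct ptp_tx_work;
  };

  static void ptp_tx_start(struct ptp_adapter *adap, struct sk_buff *skb)
  {
      adap->ptp_tx_skb = skb_get(skb);       /* pointer set first ...     */
      schedule_work(&adap->ptp_tx_work);     /* ... work scheduled second */
  }

  static void ptp_tx_cancel(struct ptp_adapter *adap)
  {
      cancel_work_sync(&adap->ptp_tx_work);  /* work cancelled first ...  */
      dev_kfree_skb_any(adap->ptp_tx_skb);   /* ... pointer cleared last  */
      adap->ptp_tx_skb = NULL;
  }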
-
David Ertman authored
In commit da1e2046, the flow for enabling/disabling an Si errata workaround (e1000_lv_jumbo_workaround_ich8lan) was changed to fix a problem with iAMT connections dropping on interface down with jumbo frames set. Part of this change was to move the function call disabling the workaround to e1000e_down() from the e1000_setup_rctl() function. The mechanism for disabling this workaround involves writing several MAC and PHY registers back to hardware defaults. After this commit, when the driver is loaded with the cable out, the PHY registers are not programmed with the correct default values. This causes the device to be capable of transmitting packets, but unable to receive them until this workaround is called. The flow of e1000e's open code relies upon calling the above workaround to explicitly program these registers either with jumbo frame appropriate settings or h/w defaults on 82579 and newer hardware. Fix this issue by adding logic to e1000_setup_rctl() that not only calls e1000_lv_jumbo_workaround_ich8lan() when jumbo frames are set, to enable the workaround, but also calls this function to explicitly disable the workaround in the case that jumbo frames are not set. Signed-off-by: Dave Ertman <davidx.m.ertman@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
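A hedged sketch of the resulting control flow (simplified: the adapter fields and the hardware-generation check are assumed, and this is not the verbatim driver code):

  /* Sketch only: enable the workaround for jumbo frames, and explicitly
   * disable it (restoring MAC/PHY defaults) when jumbo frames are not set. */
  static void setup_rctl_jumbo_workaround(struct e1000_adapter *adapter)
  {
      bool jumbo = adapter->netdev->mtu > ETH_DATA_LEN;

      e1000_lv_jumbo_workaround_ich8lan(&adapter->hw, jumbo);
  }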
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
David S. Miller authored
Conflicts: drivers/net/xen-netback/netback.c A bug fix overlapped with changing how the netback SKB control block is implemented. Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next
David S. Miller authored
John W. Linville says: ==================== pull request: wireless-next 2014-03-31 Please accept this one last round of general wireless updates for the 3.15 merge window! For the Bluetooth bits, Gustavo says: "Here follow another set of patches to 3.15. This is mostly a bug fix pull request with the exception of one commit from Marcel which adds tracking to the current configured LE scan type parameter." Beyond that, notable bits include some final refactoring of rtl8180 and the addition of the rtl8187se driver, fixes for a number of problems identified by Dan Carpenter and his static analysis tools, and a handful of other bits here and there. Please let me know if there are problems! ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
While transmitting over an at86rf231 device and unloading the module, I got: [ 29.643073] WARNING: CPU: 0 PID: 3 at kernel/workqueue.c:1335 __queue_work+0xb4/0x224() [ 29.651457] Modules linked in: at86rf230(-) autofs4 [ 29.656612] CPU: 0 PID: 3 Comm: ksoftirqd/0 Tainted: G W 3.14.0-rc6-01602-g902659e-dirty #294 [ 29.666490] [<c00124f0>] (unwind_backtrace) from [<c0010ad0>] (show_stack+0x10/0x14) [ 29.674628] [<c0010ad0>] (show_stack) from [<c0032c80>] (warn_slowpath_common+0x60/0x80) [ 29.683116] [<c0032c80>] (warn_slowpath_common) from [<c0032d30>] (warn_slowpath_null+0x18/0x20) [ 29.692329] [<c0032d30>] (warn_slowpath_null) from [<c0045b08>] (__queue_work+0xb4/0x224) [ 29.700906] [<c0045b08>] (__queue_work) from [<c0045cc8>] (queue_work_on+0x50/0x78) [ 29.708944] [<c0045cc8>] (queue_work_on) from [<c05669cc>] (mac802154_tx+0x1e4/0x240) [ 29.717164] [<c05669cc>] (mac802154_tx) from [<c0471814>] (dev_hard_start_xmit+0x2f0/0x43c) [ 29.725926] [<c0471814>] (dev_hard_start_xmit) from [<c04878d0>] (sch_direct_xmit+0x64/0x2a0) [ 29.734867] [<c04878d0>] (sch_direct_xmit) from [<c0487c38>] (__qdisc_run+0x12c/0x18c) [ 29.743169] [<c0487c38>] (__qdisc_run) from [<c046e1b0>] (net_tx_action+0xe0/0x178) [ 29.751205] [<c046e1b0>] (net_tx_action) from [<c0036690>] (__do_softirq+0x100/0x264) [ 29.759420] [<c0036690>] (__do_softirq) from [<c0036818>] (run_ksoftirqd+0x24/0x4c) [ 29.767453] [<c0036818>] (run_ksoftirqd) from [<c005232c>] (smpboot_thread_fn+0x128/0x13c) [ 29.776121] [<c005232c>] (smpboot_thread_fn) from [<c004c3fc>] (kthread+0xd0/0xe4) [ 29.784061] [<c004c3fc>] (kthread) from [<c000da88>] (ret_from_fork+0x14/0x2c) [ 29.791628] ---[ end trace 3406ff24bd973834 ]--- The problem is that interrupts can still occur after the ieee802154 device has been deregistered. This patch masks all interrupts in the at86rf2xx chips before deregistering the device. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
After commit c15b1cca ("ipv6: move DAD and addrconf_verify processing to workqueue") some counters are now updated in process context and thus need bottom halves disabled before doing so, otherwise deadlocks can happen on 32-bit archs. Fabio Estevam noticed this while mounting an NFS volume on an ARM board. To compensate for missing this, I looked over the other *_STATS_BH uses and found three other calls which need updating: 1) icmp6_send: ip6_fragment -> icmpv6_send -> icmp6_send (error handling) 2) ip6_push_pending_frames: rawv6_sendmsg -> rawv6_push_pending_frames -> ... (only in case of icmp protocol with raw sockets in error handling) 3) ping6_v6_sendmsg (error handling) Fixes: c15b1cca ("ipv6: move DAD and addrconf_verify processing to workqueue") Reported-by: Fabio Estevam <festevam@gmail.com> Tested-by: Fabio Estevam <fabio.estevam@freescale.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
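The pattern the fix relies on can be sketched as follows (a minimal illustration with a hypothetical per-cpu counter standing in for the real MIB statistics, not the actual ipv6 code): when a *_STATS_BH-style update is issued from process context, bottom halves have to be disabled around it.

  #include <linux/bottom_half.h>
  #include <linux/percpu.h>

  /* Hypothetical per-cpu counter standing in for the SNMP MIB counters. */
  DEFINE_PER_CPU(unsigned long, foo_stats);
  #define FOO_INC_STATS_BH()    this_cpu_inc(foo_stats)

  /* From softirq context the _BH form may be used directly; from process
   * context (e.g. the addrconf workqueue) BHs must be disabled around it,
   * otherwise the counter protection can deadlock on 32-bit archs. */
  static void foo_inc_from_process_context(void)
  {
      local_bh_disable();
      FOO_INC_STATS_BH();
      local_bh_enable();
  }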
-
Lucas Stach authored
Though we made sure to acquire a valid MAC for the netdevice, we never actually programmed it into the hardware. So if the bootloader did not set the MAC, network operation would only work if userspace explicitly asked to transfer the MAC to hardware. Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
First off, we don't need to check for non-NULL rt any more, as we are guaranteed to always get a valid rt6_info. Drop the check. In case we couldn't allocate an inet_peer for fragmentation information, we currently generate strictly incrementing fragmentation ids for all destinations. This is done to maximize the cycle and avoid collisions, but those fragmentation ids are very predictable; at least we should try to mix in the destination address. While it should make no difference to simply use a PRNG at this point, secure_ipv6_id ensures that we don't leak information from prandom, whose internal state could otherwise be recoverable. This fallback function should normally not get used, thus it should not affect performance at all. It is just meant as a safety net. Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The main difference between napi_frags_skb() and napi_gro_receive() is that the latter is called after the ethernet header has already been pulled by the NIC driver (eth_type_trans() was called before napi_gro_receive()). Jerry Chu in commit 299603e8 ("net-gro: Prepare GRO stack for the upcoming tunneling support") tried to remove this difference by calling eth_type_trans() from napi_frags_skb() instead of doing this later from napi_frags_finish(). The goal was that napi_gro_complete() could call ptype->callbacks.gro_complete(skb, 0) (offset of first network header = 0). Also, xxx_gro_receive() handlers all use off = skb_gro_offset(skb) to point to their own header, for the current skb and the ones held in gro_list. The problem is that this cleanup work defeated the frag0 optimization: it turns out the consecutive pskb_may_pull() calls are too expensive. This patch brings back the frag0 stuff in napi_frags_skb(). As all skbs have their mac header in the skb head, we no longer need skb_gro_mac_header(). Reported-by: Michal Schmidt <mschmidt@redhat.com> Fixes: 299603e8 ("net-gro: Prepare GRO stack for the upcoming tunneling support") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jerry Chu <hkchu@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sasha Levin authored
Binding might result in a NULL device which is later dereferenced without checking. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
david decotigny authored
This allows monitoring carrier on/off transitions and detecting link flapping issues: - new /sys/class/net/X/carrier_changes - new rtnetlink IFLA_CARRIER_CHANGES (getlink) Tested: - grep . /sys/class/net/*/carrier_changes + ip link set dev X down/up + plug/unplug cable - updated iproute2: prints IFLA_CARRIER_CHANGES - iproute2 20121211-2 (debian): unchanged behavior Signed-off-by: David Decotigny <decot@googlers.com> Signed-off-by: David S. Miller <davem@davemloft.net>
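A hedged sketch of the mechanism being added (field and function names here are illustrative stand-ins, not the exact kernel symbols): an atomic per-device counter bumped on every carrier transition, whose value sysfs then reports as /sys/class/net/<iface>/carrier_changes.

  #include <linux/atomic.h>
  #include <linux/kernel.h>

  /* Hypothetical stand-in for the per-netdevice state. */
  struct carrier_stats {
      atomic_t carrier_changes;
  };

  /* Called from the (assumed) hooks in the carrier on/off paths. */
  static void note_carrier_transition(struct carrier_stats *st)
  {
      atomic_inc(&st->carrier_changes);
  }

  /* sysfs show-style helper: prints the counter into 'buf'. */
  static int show_carrier_changes(struct carrier_stats *st, char *buf)
  {
      return sprintf(buf, "%d\n", atomic_read(&st->carrier_changes));
  }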
-
Wang Yufen authored
The issue arises when adding a policy route that specifies a particular NIC as oif: the policy route did not take effect. The reason is that fl6.oif is not set, so the route lookup fails. In the tcp_v6_send_response function, fl6.oif is set if the binding address is link-local, but not for a global address. Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wang Yufen authored
Move rt6_need_strict() into ip6_route.h as a static inline, so that it can be reused. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
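For reference, a sketch of what such a helper typically looks like (the exact body is an assumption based on the usual IPv6 scoping rules, not a quote of the patch): strict, interface-bound routing is needed for link-local, multicast and loopback destinations.

  #include <net/ipv6.h>

  /* Link-local, multicast and loopback destinations require a strict
   * (interface-bound) route lookup. */
  static inline bool rt6_need_strict(const struct in6_addr *daddr)
  {
      return ipv6_addr_type(daddr) &
             (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL | IPV6_ADDR_LOOPBACK);
  }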
-
Wang Yufen authored
Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Florian Fainelli says: ==================== net: document sysfs entries This patchset attempts to document the basic set of sysfs entries that are exposed by netdevices in /sys/class/net/<iface>/. I did not go back before the pre-git era, so the oldest entries are marked with the 2.6.12 kernel version and dated April 2005. Future patches will document the queues/ and statistics/ directories as well. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
NET_ADDR_* values are exported in the /sys/class/net/<iface>/addr_assign_type sysfs attribute, and as such constitute a user-space ABI. Move the NET_ADDR_* definitions from include/linux/netdevice.h to include/uapi/linux/netdevice.h. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
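For context, the values in question look like the following (comments paraphrased; check the uapi header for the authoritative definitions):

  /* Hardware address assignment types, reported through
   * /sys/class/net/<iface>/addr_assign_type. */
  #define NET_ADDR_PERM      0    /* address is permanent (default)           */
  #define NET_ADDR_RANDOM    1    /* address is generated randomly            */
  #define NET_ADDR_STOLEN    2    /* address is stolen from another device    */
  #define NET_ADDR_SET       3    /* address is set via dev_set_mac_address() */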
-
Florian Fainelli authored
Add sysfs attributes Documentation entries for the basic set of attributes that are exposed by a network device in /sys/class/net/<iface>/ Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yegor Yefremov authored
This device provides QMI and ethernet functionality via a standard CDC ethernet descriptor. But when driven by cdc_ether, the QMI functionality is unavailable because only cdc_ether can claim the USB interface. Thus blacklist the device in cdc_ether and add its IDs to qmi_wwan, which enables both QMI and ethernet simultaneously. Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vlad Yasevich says: ==================== bridge: Fix forwarding of 8021AD frames The bridge has its own way to determine whether a packet is forwardable, and it doesn't support 8021ad tags correctly. Instead, just allow the bridge to use the existing is_skb_forwardable() function. v2: Fix missing hunk in patch 2/2 to make it build. v3: Fix indent for is_skb_forwardable ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Yasevich authored
Use existing function instead of trying to use our own. Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
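A hedged sketch of the resulting check on the bridge side (simplified: the real forwarding path also tests port state and other conditions): instead of re-deriving the permitted frame length by hand, the bridge asks the core helper, which already accounts for the device MTU plus a possible VLAN header.

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  /* Sketch of the deliver check, not the verbatim bridge code. */
  static bool bridge_can_deliver(struct net_device *to_dev, struct sk_buff *skb)
  {
      /* is_skb_forwardable() covers the length budget including VLAN
       * headroom, so 802.1ad-tagged frames are treated consistently
       * with everything else. */
      return is_skb_forwardable(to_dev, skb);
  }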
-
Vlad Yasevich authored
Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John W. Linville authored
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next into for-davem
-
Alexey Khoroshilov authored
If allocation of io_dmabuf fails, rtl8187_probe() calls usb_put_dev(udev) while usb_get_dev(udev) is not called yet. As a result refcnt is decremented incorrectly and usb_dev can be used after memory deallocation. Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru> Acked-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
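A hedged sketch of the refcounting rule behind the fix (probe skeleton simplified, names hypothetical): only drop a device reference that has actually been taken, i.e. an allocation failure before usb_get_dev() must not be followed by usb_put_dev().

  #include <linux/usb.h>
  #include <linux/slab.h>
  #include <linux/errno.h>

  /* Simplified probe skeleton; not the actual rtl8187 code. */
  static int example_probe(struct usb_interface *intf,
                           const struct usb_device_id *id)
  {
      struct usb_device *udev = interface_to_usbdev(intf);
      void *io_dmabuf;

      io_dmabuf = kmalloc(64, GFP_KERNEL);
      if (!io_dmabuf)
          return -ENOMEM;        /* no reference taken yet, nothing to put */

      usb_get_dev(udev);         /* take the reference only on success */
      /* ... error paths from this point on must call usb_put_dev(udev) ... */
      return 0;
  }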
-
Andrea Merello authored
In rtl8180/rtl8185/rtl8187se the register space is represented using a packed structure type. Registers are thus accessed using a pointer of this type. All registers are packed together, and only small gaps are present. However, rtl8187se also has some "sparse" registers, very far from the "main register block". It would be possible to access them by simply declaring huge reserved blocks inside the register struct (and this causes NO memory waste). However, for various reasons, access to those "far" registers is done with special dedicated macros, without declaring them in the register struct. This is done in an intricate manner that makes the code less readable and caused static analysis tools to produce warnings. This patch keeps the "macro" mechanism, but changes its implementation in a simpler and more straightforward way. Signed-off-by: Andrea Merello <andrea.merello@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Dan Carpenter authored
There is a missing set of curly braces here so the debug output says "Probe confirm received" unintentionally. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
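An illustrative (not verbatim) example of the bug class being fixed: without braces only the first statement is guarded by the if, so the debug message prints for every frame. The helper names below are hypothetical.

  #include <linux/printk.h>

  /* Hypothetical helpers, for illustration only. */
  static bool is_probe_confirm(const void *frame);
  static void update_peer_state(void *peer);

  static void handle_frame_buggy(const void *frame, void *peer)
  {
      /* Bug: without braces only the first statement is conditional. */
      if (is_probe_confirm(frame))
          update_peer_state(peer);
      pr_debug("Probe confirm received\n");    /* runs unconditionally */
  }

  static void handle_frame_fixed(const void *frame, void *peer)
  {
      if (is_probe_confirm(frame)) {
          update_peer_state(peer);
          pr_debug("Probe confirm received\n");
      }
  }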
-
Amitkumar Karwar authored
[ 6630.450908] BUG: spinlock bad magic on CPU#1, ksdioirqd/mmc1/355 [ 6630.450914] Unable to handle kernel NULL pointer dereference at virtual address 0000004f [ 6630.450919] pgd = ecbd8000 [ 6630.450926] [0000004f] *pgd=00000000 [ 6630.450936] lock: 0xeea4ab08, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0 [ 6630.450939] Backtrace: [ 6630.450956] [<c010d354>] (unwind_backtrace+0x0/0x118) from [<c060c238>] (dump_stack+0x28/0x30) [ 6630.450960] Internal error: Oops: 5 [#1] SMP ARM [ 6630.450964] Modules linked in: uvcvideo videobuf2_vmalloc [ 6630.450980] [<c060c238>] (dump_stack+0x28/0x30) from [<c0315ab4>] (spin_dump+0x80/0x94) [ 6630.450988] [<c0315ab4>] (spin_dump+0x80/0x94) from [<c0315af4>] (spin_bug+0x2c/0x30) [ 6630.450996] [<c0315af4>] (spin_bug+0x2c/0x30) from [<c0315b80>] (do_raw_spin_lock+0x28/0x15c) [ 6630.451004] [<c0315b80>] (do_raw_spin_lock+0x28/0x15c) from [<c0610c24>] (_raw_spin_lock_irqsave+0x20/0x28) [ 6630.451016] [<c0610c24>] (_raw_spin_lock_irqsave+0x20/0x28) from [<bf07a7f4>] (mwifiex_exec_next_cmd +0x6c/0x45c [mwifiex]) [ 6630.451030] [<bf07a7f4>] (mwifiex_exec_next_cmd+0x6c/0x45c [mwifiex]) from [<bf07834c>] (mwifiex_main_process+0x2c8/0x464 [mwifiex]) [ 6630.451047] [<bf07834c>] (mwifiex_main_process+0x2c8/0x464 [mwifiex]) from [<bf0a093c>] (mwifiex_sdio_interrupt+0xc8/0x1cc [mwifiex_sdio] [ 6630.451064] [<bf0a093c>] (mwifiex_sdio_interrupt+0xc8/0x1cc [mwifiex_sdio]) from [<c04bbde0>] (sdio_irq_thread+0x178/0x31c) [ 6630.451079] [<c04bbde0>] (sdio_irq_thread+0x178/0x31c) from [<c0145514>] (kthread+0xc8/0xd8) [ 6630.451095] [<c0145514>] (kthread+0xc8/0xd8) from [<c0106118>] (ret_from_fork+0x14/0x20) This bug has been introduced/exposed by a recent patch in which we cancel pending commands before suspend (using the hs_enabling flag). The NULL pointer is dereferenced when both mwifiex_cancel_all_pending_cmd() and mwifiex_exec_next_cmd() try to access the cmd pending queue simultaneously. Signed-off-by: Amitkumar Karwar <akarwar@marvell.com> Signed-off-by: Bing Zhao <bzhao@marvell.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Andrea Merello authored
The ANAPARAM3 register, defined in the rtl818x common register struct, is accessed as 16-bit by rtl8187se and as 8-bit by rtl8187b. Since I have no documentation about this, I can only stick to the reference code and to what is known to work. This issue has been addressed by a patch from Larry Finger that introduces a "union" in the register struct. In my last patch set I applied it to the register struct, but forgot to update the rtl8187 driver too. This patch does it. Suggested-by: Larry Finger <Larry.Finger@lwfinger.net> [ Original patch ] Signed-off-by: Andrea Merello <andrea.merello@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Adam Lee authored
Some HP notebooks using this rtl8188ee hardware module can't get AP scan results with pin-based interrupts mode; enabling MSI interrupts mode could fix it. According to RealTek's testing results, RTL8188EE works well with both MSI mode and the pin-based mode fallback. Signed-off-by: Adam Lee <adam.lee@canonical.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Adam Lee authored
Add MSI interrupts mode support: enable it when a submodule's msi_support flag is true, and fall back to pin-based interrupts mode if MSI interrupts mode fails. RealTek's policy (on modules which work well with MSI interrupts mode) is: > If the platform supports both MSI and pin-based, use MSI. > If the platform supports MSI only, use MSI. > If the platform supports pin-based only, use pin-based. Also, according to RealTek's testing results, RTL8188EE and RTL8723BE work well with both MSI mode and the pin-based mode fallback. Signed-off-by: Adam Lee <adam.lee@canonical.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
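A hedged sketch of the enable-with-fallback flow described above (the handler, the name strings and the msi_support flag are assumed to come from the driver's per-chip configuration; this is not the rtlwifi code itself):

  #include <linux/pci.h>
  #include <linux/interrupt.h>

  static int setup_interrupts(struct pci_dev *pdev, irq_handler_t handler,
                              void *drvdata, bool msi_support)
  {
      int ret;

      if (msi_support && !pci_enable_msi(pdev)) {
          ret = request_irq(pdev->irq, handler, 0, "example_msi", drvdata);
          if (!ret)
              return 0;               /* MSI mode in use */
          pci_disable_msi(pdev);      /* MSI failed, fall back below */
      }

      /* Pin-based (legacy) interrupts, shared line. */
      return request_irq(pdev->irq, handler, IRQF_SHARED, "example_pci", drvdata);
  }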
-
Amitkumar Karwar authored
It has been observed that the system hangs during suspend if host sleep activation fails due to a missing interrupt from firmware. Use the timeout variant, so that the thread will be woken up when the timer expires. Signed-off-by: Amitkumar Karwar <akarwar@marvell.com> Signed-off-by: Bing Zhao <bzhao@marvell.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
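The change in waiting primitive can be sketched as follows (the wait queue, condition flag and timeout value are hypothetical stand-ins for the driver's own state): a bounded wait wakes the suspend thread even if the host-sleep-activated interrupt never arrives.

  #include <linux/wait.h>
  #include <linux/jiffies.h>
  #include <linux/errno.h>

  /* Hypothetical stand-ins for the driver's wait queue and condition flag. */
  struct hs_ctx {
      wait_queue_head_t hs_wait_q;
      bool hs_activated;
  };

  static int wait_for_hs_activation(struct hs_ctx *ctx)
  {
      /* wait_event_interruptible_timeout(): 0 on timeout, <0 on signal,
       * remaining jiffies (>0) if the condition became true in time. */
      long ret = wait_event_interruptible_timeout(ctx->hs_wait_q,
                                                  ctx->hs_activated,
                                                  msecs_to_jiffies(10000));
      if (ret == 0)
          return -ETIMEDOUT;    /* firmware interrupt never came */
      if (ret < 0)
          return ret;           /* interrupted by a signal */
      return 0;
  }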
-
Amitkumar Karwar authored
When a thread is interrupted by a signal, all wait_event_interruptible calls after queueing commands return an error. The number of commands in the pending queue increases in this case, and sometimes all command nodes in the pool get filled. We now cancel pending commands when a signal is received. Signed-off-by: Amitkumar Karwar <akarwar@marvell.com> Signed-off-by: Bing Zhao <bzhao@marvell.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Amitkumar Karwar authored
When a scan request is received, scan commands are prepared and queued into the scan pending queue. There is a corner case when command nodes are exhausted: we stop queueing further scan commands and return an error. This patch makes sure that the commands already queued in the scan pending queue are also freed in this case. Signed-off-by: Amitkumar Karwar <akarwar@marvell.com> Signed-off-by: Bing Zhao <bzhao@marvell.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
David S. Miller authored
Daniel Borkmann says: ==================== BPF updates We sat down and have heavily reworked the whole previous patchset from v10 [1] to address all comments/concerns. This patchset therefore *replaces* the internal BPF interpreter with the new layout as discussed in [1], and migrates some exotic callers to properly use the BPF API for a transparent upgrade. All other callers that already use the BPF API in a way it should be used, need no further changes to run the new internals. We also removed the sysctl knob entirely, and do not expose any structure to userland, so that implementation details only reside in kernel space. Since we are replacing the interpreter we had to migrate seccomp in one patch along with the interpreter to not break anything. When attaching a new filter, the flow can be described as following: i) test if jit compiler is enabled and can compile the user BPF, ii) if so, then go for it, iii) if not, then transparently migrate the filter into the new representation, and run it in the interpreter. Also, we have scratched the jit flag from the len attribute and made it as initial patch in this series as Pablo has suggested in the last feedback, thanks. For details, please refer to the patches themselves. We did extensive testing of BPF and seccomp on the new interpreter itself and also on the user ABIs and could not find any issues; new performance numbers as posted in patch 8 are also still the same. Please find more details in the patches themselves. For all the previous history from v1 to v10, see [1]. We have decided to drop the v11 as we have pedantically reworked the set, but of course, included all previous feedback. v3 -> v4: - Applied feedback from Dave regarding swap insns - Rebased on net-next v2 -> v3: - Rebased to latest net-next (i.e. w/ rxhash->hash rename) - Fixed patch 8/9 commit message/doc as suggested by Dave - Rest is unchanged v1 -> v2: - Rebased to latest net-next - Added static to ptp_filter as suggested by Dave - Fixed a typo in patch 8's commit message - Rest unchanged Thanks ! [1] http://thread.gmane.org/gmane.linux.kernel/1665858 ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexei Starovoitov authored
Further extend the current BPF documentation to document new BPF engine internals. Joint work with Daniel Borkmann. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexei Starovoitov authored
This patch replaces/reworks the kernel-internal BPF interpreter with an optimized BPF instruction set format that is modelled closer to mimic native instruction sets and is designed to be JITed with one to one mapping. Thus, the new interpreter is noticeably faster than the current implementation of sk_run_filter(); mainly for two reasons: 1. Fall-through jumps: BPF jump instructions are forced to go either 'true' or 'false' branch which causes branch-miss penalty. The new BPF jump instructions have only one branch and fall-through otherwise, which fits the CPU branch predictor logic better. `perf stat` shows drastic difference for branch-misses between the old and new code. 2. Jump-threaded implementation of interpreter vs switch statement: Instead of a single table-jump at the top of the 'switch' statement, gcc will now generate multiple table-jump instructions, which helps CPU branch predictor logic. Note that the verification of filters is still being done through sk_chk_filter() in classical BPF format, so filters from user- or kernel space are verified in the same way as we do now, and the same restrictions/constraints hold as well. We reuse current BPF JIT compilers in a way that this upgrade would even be fine as is, but nevertheless allows for a successive upgrade of BPF JIT compilers to the new format. The internal instruction set migration is being done after the probing for JIT compilation, so in case JIT compilers are able to create a native opcode image, we're going to use that, and in all other cases we're doing a follow-up migration of the BPF program's instruction set, so that it can be transparently run in the new interpreter.

In short, the *internal* format extends BPF in the following way (more details can be taken from the appended documentation):
- Number of registers increases from 2 to 10
- Register width increases from 32-bit to 64-bit
- Conditional jt/jf targets replaced with jt/fall-through
- Adds signed > and >= insns
- 16 4-byte stack slots for register spill-fill replaced with up to 512 bytes of multi-use stack space
- Introduction of bpf_call insn and register passing convention for zero overhead calls from/to other kernel functions
- Adds arithmetic right shift and endianness conversion insns
- Adds atomic_add insn
- Old tax/txa insns are replaced with 'mov dst,src' insn

Performance of two BPF filters generated by libpcap resp. bpf_asm was measured on x86_64, i386 and arm32 (other libpcap programs have similar performance differences): fprog #1 is taken from Documentation/networking/filter.txt: tcpdump -i eth0 port 22 -dd fprog #2 is taken from 'man tcpdump': tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -dd

Raw performance data from the BPF micro-benchmark: SK_RUN_FILTER on the same SKB (cache-hit) or 10k SKBs (cache-miss); time in ns per call, smaller is better:

  --x86_64--
               fprog #1    fprog #1    fprog #2    fprog #2
               cache-hit   cache-miss  cache-hit   cache-miss
  old BPF          90         101         192         202
  new BPF          31          71          47          97
  old BPF jit      12          34          17          44
  new BPF jit     TBD

  --i386--
               fprog #1    fprog #1    fprog #2    fprog #2
               cache-hit   cache-miss  cache-hit   cache-miss
  old BPF         107         136         227         252
  new BPF          40         119          69         172

  --arm32--
               fprog #1    fprog #1    fprog #2    fprog #2
               cache-hit   cache-miss  cache-hit   cache-miss
  old BPF         202         300         475         540
  new BPF         180         270         330         470
  old BPF jit      26         182          37         202
  new BPF jit     TBD

Thus, without changing any userland BPF filters, applications on top of AF_PACKET (or other families) such as libpcap/tcpdump, the cls_bpf classifier, netfilter's xt_bpf, the team driver's load-balancing mode, and many more will have better interpreter filtering performance. While we are replacing the internal BPF interpreter, we also need to convert seccomp BPF in the same step to make use of the new internal structure, since it makes use of lower-level API details without being further decoupled through higher-level calls like sk_unattached_filter_{create,destroy}(), for example. Just as for normal socket filtering, seccomp BPF also experiences a time-to-verdict speedup. 05-sim-long_jumps.c of libseccomp was used as micro-benchmark:

  seccomp_rule_add_exact(ctx,...
  seccomp_rule_add_exact(ctx,...
  rc = seccomp_load(ctx);
  for (i = 0; i < 10000000; i++)
      syscall(199, 100);

'short filter' has 2 rules, 'large filter' has 200 rules. 'short filter' performance is slightly better on x86_64/i386/arm32; 'large filter' is much faster on x86_64 and i386 and shows no difference on arm32.

  --x86_64-- short filter
  old BPF: 2.7 sec
    39.12% bench libc-2.15.so       [.] syscall
     8.10% bench [kernel.kallsyms]  [k] sk_run_filter
     6.31% bench [kernel.kallsyms]  [k] system_call
     5.59% bench [kernel.kallsyms]  [k] trace_hardirqs_on_caller
     4.37% bench [kernel.kallsyms]  [k] trace_hardirqs_off_caller
     3.70% bench [kernel.kallsyms]  [k] __secure_computing
     3.67% bench [kernel.kallsyms]  [k] lock_is_held
     3.03% bench [kernel.kallsyms]  [k] seccomp_bpf_load
  new BPF: 2.58 sec
    42.05% bench libc-2.15.so       [.] syscall
     6.91% bench [kernel.kallsyms]  [k] system_call
     6.25% bench [kernel.kallsyms]  [k] trace_hardirqs_on_caller
     6.07% bench [kernel.kallsyms]  [k] __secure_computing
     5.08% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp

  --arm32-- short filter
  old BPF: 4.0 sec
    39.92% bench [kernel.kallsyms]  [k] vector_swi
    16.60% bench [kernel.kallsyms]  [k] sk_run_filter
    14.66% bench libc-2.17.so       [.] syscall
     5.42% bench [kernel.kallsyms]  [k] seccomp_bpf_load
     5.10% bench [kernel.kallsyms]  [k] __secure_computing
  new BPF: 3.7 sec
    35.93% bench [kernel.kallsyms]  [k] vector_swi
    21.89% bench libc-2.17.so       [.] syscall
    13.45% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
     6.25% bench [kernel.kallsyms]  [k] __secure_computing
     3.96% bench [kernel.kallsyms]  [k] syscall_trace_exit

  --x86_64-- large filter
  old BPF: 8.6 seconds
    73.38% bench [kernel.kallsyms]  [k] sk_run_filter
    10.70% bench libc-2.15.so       [.] syscall
     5.09% bench [kernel.kallsyms]  [k] seccomp_bpf_load
     1.97% bench [kernel.kallsyms]  [k] system_call
  new BPF: 5.7 seconds
    66.20% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
    16.75% bench libc-2.15.so       [.] syscall
     3.31% bench [kernel.kallsyms]  [k] system_call
     2.88% bench [kernel.kallsyms]  [k] __secure_computing

  --i386-- large filter
  old BPF: 5.4 sec
  new BPF: 3.8 sec

  --arm32-- large filter
  old BPF: 13.5 sec
    73.88% bench [kernel.kallsyms]  [k] sk_run_filter
    10.29% bench [kernel.kallsyms]  [k] vector_swi
     6.46% bench libc-2.17.so       [.] syscall
     2.94% bench [kernel.kallsyms]  [k] seccomp_bpf_load
     1.19% bench [kernel.kallsyms]  [k] __secure_computing
     0.87% bench [kernel.kallsyms]  [k] sys_getuid
  new BPF: 13.5 sec
    76.08% bench [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
    10.98% bench [kernel.kallsyms]  [k] vector_swi
     5.87% bench libc-2.17.so       [.] syscall
     1.77% bench [kernel.kallsyms]  [k] __secure_computing
     0.93% bench [kernel.kallsyms]  [k] sys_getuid

BPF filters generated by seccomp are very branchy, so the new internal BPF performance is better than the old one. Performance gains will be even higher when BPF JIT is committed for the new structure, which is planned in future work (as successive JIT migrations). BPF has also been stress-tested with trinity's BPF fuzzer. Joint work with Daniel Borkmann. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Hagen Paul Pfeifer <hagen@jauu.net> Cc: Kees Cook <keescook@chromium.org> Cc: Paul Moore <pmoore@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: H. Peter Anvin <hpa@linux.intel.com> Cc: linux-kernel@vger.kernel.org Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
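A hedged sketch of the kind of instruction encoding the new internal format implies (field names and layout approximate the description above — 10 registers, a single fall-through jump offset, a 32-bit immediate — and are not a verbatim copy of the kernel structure):

  #include <linux/types.h>

  /* Approximate layout of one internal BPF instruction. */
  struct bpf_insn_sketch {
      __u8   code;          /* opcode                              */
      __u8   dst_reg:4;     /* destination register (one of r0-r9) */
      __u8   src_reg:4;     /* source register                     */
      __s16  off;           /* signed jump/memory offset           */
      __s32  imm;           /* signed immediate constant           */
  };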
-
Daniel Borkmann authored
Similarly as in ppp, we need to migrate the ISDN/PPP code to make use of the sk_unattached_filter api in order to decouple having direct filter structure access. By using sk_unattached_filter_{create,destroy}, we can allow for the possibility to jit compile filters for faster filter verdicts as well. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Karsten Keil <isdn@linux-pingi.de> Cc: isdn4linux@listserv.isdn4linux.de Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
For the ppp driver, there are currently two open-coded BPF filters in use, that is, pass_filter and active_filter. Migrate both to make proper use of sk_unattached_filter_{create,destroy} API so that the actual BPF code is decoupled from direct access, and filters can be jited as a side-effect by the internal filter compiler. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Paul Mackerras <paulus@samba.org> Cc: linux-ppp@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
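A hedged sketch of the migration pattern used here and in the ISDN patch above (error handling abbreviated, and the classic-BPF program 'insns'/'len' stands for something like ppp's pass_filter): wrap the program in a sock_fprog, let the core create and possibly JIT an unattached filter, and run it per packet through the filter API instead of poking at a raw instruction array.

  #include <linux/filter.h>
  #include <linux/skbuff.h>

  /* Build the unattached filter once (e.g. at configuration time). */
  static struct sk_filter *build_filter(struct sock_filter *insns,
                                        unsigned short len)
  {
      struct sock_fprog fprog = { .len = len, .filter = insns };
      struct sk_filter *fp = NULL;

      if (sk_unattached_filter_create(&fp, &fprog))
          return NULL;
      return fp;    /* released later with sk_unattached_filter_destroy(fp) */
  }

  /* Run it per packet; a non-zero return value means "accept". */
  static bool packet_passes(struct sk_filter *fp, struct sk_buff *skb)
  {
      return SK_RUN_FILTER(fp, skb) != 0;
  }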
-
Daniel Borkmann authored
There are currently pch_gbe, cpts, and ixp4xx_eth drivers that open-code and reimplement a BPF classifier for the PTP protocol. Since all of them effectively do the very same thing and load the very same PTP/BPF filter, we can just consolidate that code by introducing ptp_classify_raw() in the time-stamping core framework which can be used in drivers. As drivers get initialized after bootstrapping the core networking subsystem, they can make use of ptp_insns wrapped through ptp_classify_raw(), which allows to simplify and remove PTP classifier setup code in drivers. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Richard Cochran <richard.cochran@omicron.at> Cc: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
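A hedged sketch of how a driver would use the consolidated helper instead of carrying its own open-coded classifier (the timestamping callback is hypothetical):

  #include <linux/ptp_classify.h>
  #include <linux/skbuff.h>

  /* Classify the raw frame and only hand PTP packets to the
   * (hypothetical) driver timestamping path. */
  static void maybe_timestamp(struct sk_buff *skb,
                              void (*timestamp_skb)(struct sk_buff *))
  {
      unsigned int type = ptp_classify_raw(skb);

      if (type == PTP_CLASS_NONE)
          return;               /* not a PTP frame, nothing to do */

      timestamp_skb(skb);
  }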
-
Daniel Borkmann authored
This patch migrates an open-coded sk_run_filter() implementation with proper use of the BPF API, that is, sk_unattached_filter_create(). This migration is needed, as we will be internally transforming the filter to a different representation, and therefore needs to be decoupled. It is okay to do so as skb_timestamping_init() is called during initialization of the network stack in core initcall via sock_init(). This would effectively also allow for PTP filters to be jit compiled if bpf_jit_enable is set. For better readability, there are also some newlines introduced, also ptp_classify.h is only in kernel space. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Richard Cochran <richard.cochran@omicron.at> Cc: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
This patch basically does two things: i) removes the extern keyword from the include/linux/filter.h file to be more consistent with the rest of Joe's changes, and ii) moves filter accounting into the filter core framework. Filter accounting, mainly done through sk_filter_{un,}charge(), takes care of the case when sockets are being cloned through sk_clone_lock(), so that removal of the filter on one socket won't result in eviction as it's still referenced by the other. These functions actually belong to net/core/filter.c and not include/net/sock.h, as we want to keep all that in a central place. It's also not in the fast path, so uninlining them is fine and even allows us to get rid of sk_filter_release_rcu()'s EXPORT_SYMBOL and a forward declaration. Joint work with Alexei Starovoitov. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-