- 27 Jan, 2015 6 commits
-
-
Nicolae Rosia authored
The driver is trying to acquire clocks which may not be available yet. Allow the driver to request deferred probe by providing a probe function and registering it with module_platform_driver. [1] This patch is based on 3.19-rc5. [1] https://lkml.org/lkml/2013/9/23/118 Signed-off-by: Nicolae Rosia <nicolae.rosia@certsign.ro> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Erik Hugne authored
The flows are hashed on the sending node address, which allows us to spread out the TIPC link processing to RPS enabled cores. There is no point in including the destination address in the hash, as it will always be the same for all inbound links. We have experimented with a 3-tuple hash over [srcnode, sport, dport], but this showed to give slightly lower performance because of increased lock contention when the same link was handled by multiple cores. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Erik Hugne <erik.hugne@ericsson.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Erik Hugne authored
If a large number of namespaces is spawned on a node and TIPC is enabled in each of these, the excessive printk tracing of network events will cause the system to grind down to a near halt. The traces are still of debug value, so instead of removing them completely we change the link state and node availability logging to debug traces. Signed-off-by: Erik Hugne <erik.hugne@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
David S. Miller authored
Kalle Valo says: ==================== pull-request: wireless-drivers-next 2015-01-22 now a bigger pull request for net-next. Rafal found a UTF-8 bug in patchwork[1] and because of that two commits (d0c102f7 and d0f66df5) have his name corrupted: Acked-by: Rafa? Mi?ecki <zajec5@gmail.com> Somehow I failed to spot that when I committed the patches. As rebasing public git trees is bad, I thought we can live with these and decided not to rebase. But I'll pay close attention to this in the future to make sure that it won't happen again. Also we requested an update to patchwork.kernel.org, the latest patchwork doesn't seem to have this bug. Also please note this pull request also adds one DT binding doc, but this was reviewed in the device tree list: .../bindings/net/wireless/qcom,ath10k.txt | 30 + Please let me know if you have any issues. [1] https://lists.ozlabs.org/pipermail/patchwork/2015-January/001261.html ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
As in cls_bpf, this code also needs to reject mismatches. Reference: http://article.gmane.org/gmane.linux.network/347406 Fixes: d23b8ad8 ("tc: add BPF based action") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
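The check being added mirrors the one in cls_bpf; roughly (a hedged sketch using the act_bpf UAPI attribute name, not the exact hunk):

    /* Hedged sketch of the size-mismatch rejection, mirroring cls_bpf;
     * not the exact hunk from act_bpf. */
    bpf_size = bpf_num_ops * sizeof(*bpf_ops);
    if (bpf_size != nla_len(tb[TCA_ACT_BPF_OPS]))
            return -EINVAL;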
-
Daniel Borkmann authored
As soon as we've found a matching handle in basic_get(), we can return it. There's no need to continue walking until the end of a filter chain, since they are unique anyway. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Cc: Thomas Graf <tgraf@suug.ch> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
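The resulting lookup is just a list walk with an early return, roughly (a sketch; field names follow cls_basic but are illustrative rather than verbatim):

    /* Sketch of the early return in basic_get(); treat names as illustrative. */
    list_for_each_entry(f, &head->flist, link) {
            if (f->handle == handle)
                    return (unsigned long) f;
    }
    return 0;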
-
- 26 Jan, 2015 25 commits
-
-
Sonic Zhang authored
This property defines the AXI bus burst length. Signed-off-by: Sonic Zhang <sonic.zhang@analog.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sonic Zhang authored
Clear the TX COE bit when force_thresh_dma_mode is set, even if the hardware DMA capability reports support. Tested on BF609. Signed-off-by: Sonic Zhang <sonic.zhang@analog.com> Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sonic Zhang authored
stmmac: if force_thresh_dma_mode is set, pass tc to both txmode and rxmode in tx_hard_error_bump_tc interrupt Don't pass SF_DMA_MODE to rxmode in this case. Signed-off-by: Sonic Zhang <sonic.zhang@analog.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Joe Stringer says: ==================== openvswitch: Introduce 128-bit unique flow identifiers. This series extends the openvswitch datapath interface for flow commands to use 128-bit unique identifiers as an alternative to the netlink-formatted flow key. This significantly reduces the cost of assembling messages between the kernel and userspace, in particular improving Open vSwitch revalidation performance by 40% or more.
v14: - Perform lookup using unmasked key in legacy case. - Fix minor checkpatch.pl style violations.
v13: - Embed sw_flow_id in sw_flow to save memory allocation in UFID case. - Malloc unmasked key for id in non-UFID case. - Fix bug where non-UFID case could double-serialize keys.
v12: - Userspace patches fully merged into Open vSwitch master - New minor refactor patches (2,3,4) - Merge unmasked_key, ufid representation of flow identifier in sw_flow - Improve memory allocation sizes when serializing ufid - Handle corner case where a flow_new is requested with a flow that has an identical ufid to an existing flow, but a different flow key - Limit UFID to 1-16 octets inclusive. - Add various helper functions to improve readability
v11: - Pushed most of the prerequisite patches for this series to OVS master. - Split out openvswitch.h interface changes from datapath implementation - Datapath implementation to be reviewed on net-next, separately
v10: - New patch allowing datapath to serialize masked keys - Simplify datapath interface by accepting UFID or flow_key, but not both - Flows set up with UFID must be queried/deleted using UFID - Reduce sw_flow memory usage for UFID - Don't periodically rehash UFID table in linux datapath - Remove kernel_only UFID in linux datapath
v9: - No kernel changes
v8: - Rename UID -> UFID - Fix null dereference in datapath when paired with older userspace - All patches are reviewed/acked except datapath changes.
v7: - Remove OVS_DP_F_INDEX_BY_UID - Rework datapath UID serialization for variable length UIDs
v6: - Reduce netlink conversions for all datapaths - Various bugfixes
v5: - Various bugfixes - Improve logging
v4: - Datapath memory leak fixes - Enable UID-based terse dumping and deleting by default - Various fixes
RFCv3: - Add datapath implementation ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Stringer authored
Previously, flows were manipulated by userspace specifying a full, unmasked flow key. This adds significant burden onto flow serialization/deserialization, particularly when dumping flows. This patch adds an alternative way to refer to flows using a variable-length "unique flow identifier" (UFID). At flow setup time, userspace may specify a UFID for a flow, which is stored with the flow and inserted into a separate table for lookup, in addition to the standard flow table. Flows created using a UFID must be fetched or deleted using the UFID. All flow dump operations may now be made more terse with OVS_UFID_F_* flags. For example, the OVS_UFID_F_OMIT_KEY flag allows responses to omit the flow key from a datapath operation if the flow has a corresponding UFID. This significantly reduces the time spent assembling and transacting netlink messages. With all OVS_UFID_F_OMIT_* flags enabled, the datapath only returns the UFID and statistics for each flow during flow dump, increasing ovs-vswitchd revalidator performance by 40% or more. Signed-off-by: Joe Stringer <joestringer@nicira.com> Acked-by: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
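As a hedged sketch of how the identifier ends up in a reply (OVS_FLOW_ATTR_UFID and OVS_UFID_F_OMIT_KEY are from the openvswitch UAPI; the helpers and exact field access below are illustrative, not the actual datapath code):

    /* Hedged sketch, not the actual datapath code: emit the flow's UFID and
     * omit the key only when a UFID exists and a terse dump was requested. */
    static int flow_put_identifier(struct sk_buff *skb,
                                   const struct sw_flow *flow,
                                   u32 ufid_flags)
    {
            int err;

            if (flow_has_ufid(flow)) {              /* illustrative helper */
                    err = nla_put(skb, OVS_FLOW_ATTR_UFID,
                                  flow->id.ufid_len, flow->id.ufid);
                    if (err)
                            return err;
                    if (ufid_flags & OVS_UFID_F_OMIT_KEY)
                            return 0;       /* the UFID alone identifies the flow */
            }

            /* Legacy flows (and non-terse dumps) still carry the unmasked key. */
            return flow_put_unmasked_key(skb, flow);        /* illustrative helper */
    }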
-
Joe Stringer authored
The first user will be the next patch. Signed-off-by: Joe Stringer <joestringer@nicira.com> Acked-by: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Stringer authored
These minor tidyups make a future patch a little tidier. Signed-off-by: Joe Stringer <joestringer@nicira.com> Acked-by: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Stringer authored
Rework so that ovs_flow_tbl_insert() calls flow_{key,mask}_insert(). This tidies up a future patch. Signed-off-by: Joe Stringer <joestringer@nicira.com> Acked-by: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Stringer authored
Refactor the ovs_nla_fill_match() function into separate netlink serialization functions ovs_nla_put_{unmasked_key,mask}(). Modify ovs_nla_put_flow() to handle attribute nesting and expose the 'is_mask' parameter - all callers need to nest the flow, and callers have better knowledge about whether it is serializing a mask or not. Signed-off-by: Joe Stringer <joestringer@nicira.com> Acked-by: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'linux-can-next-for-3.20-20150121' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next Marc Kleine-Budde says: ==================== pull-request: can-next 2015-21-01 this is a pull request of 4 patches for net-next/master. Andri Yngvason contributes one patch to further consolidate the CAN state change handling. The next patch is by the kbuild test robot/Fengguang Wu and fixes a coccinelle warning in the CAN infrastructure. The last two patches are by me; they remove an unused variable from the flexcan and at91_can drivers. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Sergei Shtylyov says: ==================== sh_eth: massage PM code Here's a set of 2 patches against DaveM's 'net-next.git' repo. We're adding support for suspend/hibernation as well as somewhat changing the existing code. There is still an MDIO-related issue with suspend (a kernel exception); we've been working on it and shall address it with a separate patch... ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mikhail Ulyanov authored
Add sh_eth_{suspend|resume}() implementing the {suspend|resume|freeze|thaw|poweroff|restore}() PM methods to make it possible to restore from hibernation not only in Linux but also in e.g. U-Boot, and to have a more deterministic state on resume/restore. Signed-off-by: Mikhail Ulyanov <mikhail.ulyanov@cogentembedded.com> [Sergei: moved sh_eth_{suspend|resume}() before sh_eth_runtime_nop(), enclosed them with #ifdef CONFIG_PM_SLEEP, reordered the local variables, got rid of *goto* and label, reordered macro invocations, renamed, modified the changelog.] Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mikhail Ulyanov authored
Use SET_RUNTIME_PM_OPS() macro to initialize the runtime PM method pointers in the 'struct dev_pm_ops'. Signed-off-by: Mikhail Ulyanov <mikhail.ulyanov@cogentembedded.com> [Sergei: renamed, added the changelog.] Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
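Taken together with the previous patch, the resulting ops table looks roughly like this (a sketch based on the callback names mentioned in these two commits, not necessarily the exact driver code):

    /* Hedged sketch of the dev_pm_ops wiring using the standard macros. */
    static const struct dev_pm_ops sh_eth_dev_pm_ops = {
            SET_SYSTEM_SLEEP_PM_OPS(sh_eth_suspend, sh_eth_resume)
            SET_RUNTIME_PM_OPS(NULL, NULL, sh_eth_runtime_nop)
    };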
-
Beniamino Galvani authored
Add support for Byte Queue Limits to the STMicro MAC driver. Tested on an Amlogic S802 quad Cortex-A9 board, where the use of BQL decreases the latency of a high-priority ping from ~12ms to ~1ms when the 100Mbit link is saturated by 20 TCP streams. Signed-off-by: Beniamino Galvani <b.galvani@gmail.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
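For reference, BQL accounting in a driver boils down to the two standard helpers below (a sketch; the surrounding variable names are illustrative, not the stmmac code):

    /* Hedged sketch of BQL accounting with the standard helpers. */

    /* xmit path, after queueing a frame on the TX ring: */
    netdev_tx_sent_queue(netdev_get_tx_queue(dev, 0), skb->len);

    /* TX-clean path, after reclaiming completed descriptors: */
    netdev_tx_completed_queue(netdev_get_tx_queue(dev, 0),
                              pkts_cleaned, bytes_cleaned);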
-
Hariprasad Shenai authored
commit b5a02f50 ("cxgb4 : Update ipv6 address handling api") introduced a regression where cxgb4_inet6addr_notifier was not being unregistered during module_exit. Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
As removals can occur during resizes, entries may be referred to from both tbl and future_tbl when the removal is requested. Therefore rhashtable_remove() must unlink the entry in both tables if this is the case. The existing code did search both tables but stopped when it hit the first match. Failing to unlink in both tables resulted in use after free. Fixes: 97defe1e ("rhashtable: Per bucket locks & deferred expansion/shrinking") Reported-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
IPv6 TCP sockets store skbs in np->pktoptions and use skb_set_owner_r() to charge the skb to the socket. This means the destructor must be called while the socket is locked. Therefore, we cannot use skb_get() or atomic_inc(&skb->users) to protect ourselves: kfree_skb() might race with other users manipulating sk->sk_forward_alloc. Fix this race by holding the socket lock for the duration of ip6_datagram_recv_ctl(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
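The fix boils down to taking the socket lock around the control-message generation instead of bumping skb->users, roughly (a sketch of the pattern, not the exact diff; declarations elided):

    /* Sketch of the locking pattern described above. */
    lock_sock(sk);
    skb = np->pktoptions;
    if (skb)
            ip6_datagram_recv_ctl(sk, &msg, skb);   /* runs with the socket locked */
    release_sock(sk);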
-
Patrick McHardy authored
"next" is not updated, causing an endless loop for buckets with more than one element. Fixes: 88d6ed15 ("rhashtable: Convert bucket iterators to take table and index") Signed-off-by: Patrick McHardy <kaber@trash.net> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shaohui Xie authored
spin_event_timeout() is PPC-dependent; use an arch-independent equivalent instead. Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
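An arch-independent replacement is typically a jiffies-based poll loop, roughly (a hedged sketch; the register offset and busy bit are placeholders, and 'regs' stands for the device's ioremapped base, not the driver's real definitions):

    /* Hedged sketch of an arch-independent poll loop replacing
     * spin_event_timeout(); EXAMPLE_* names and the timeout are placeholders. */
    #define EXAMPLE_STATUS_OFF  0x00        /* placeholder register offset */
    #define EXAMPLE_BUSY        BIT(0)      /* placeholder busy bit */

    unsigned long deadline = jiffies + msecs_to_jiffies(10);

    while (ioread32be(regs + EXAMPLE_STATUS_OFF) & EXAMPLE_BUSY) {
            if (time_after(jiffies, deadline))
                    return -ETIMEDOUT;
            cpu_relax();
    }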
-
Shaohui Xie authored
Use ioread32be() & iowrite32be() instead. Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
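The substitution is mechanical, but note the reversed argument order of iowrite32be() (a sketch; 'regs' and 'val' are placeholders):

    /* Before: PowerPC-only accessors. */
    val = in_be32(regs);
    out_be32(regs, val);

    /* After: arch-independent big-endian MMIO accessors; note that
     * iowrite32be() takes the value first and the address second. */
    val = ioread32be(regs);
    iowrite32be(val, regs);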
-
Eric Dumazet authored
In commit 5a7baa78 ("bonding: Advertize vxlan offload features when supported"), Or Gerlitz added conditional support for vxlan offload. In this patch I also add support for all kinds of tunnels, but we allow a bonding device to not require segmentation, as it is always better to perform this segmentation at the very last stage, if a particular slave device requires it. Tested: Set up a GRE tunnel on a physical NIC not having tx-gre-segmentation. Results on bnx2x are even better, as we no longer have to segment in software.
ethtool -K bond0 tx-gre-segmentation off
super_netperf 50 --google-pacing-rate 30000000 -H 10.7.8.152 -l 15
7538.32
ethtool -K bond0 tx-gre-segmentation on
super_netperf 50 --google-pacing-rate 30000000 -H 10.7.8.152 -l 15
10200.5
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
Static checkers complain that we should maybe set "ret" before we do the "goto out;". They interpret the NULL return from br_port_get_rtnl() as a failure and forgetting to set the error code is a common bug in this situation. The code is confusing but it's actually correct. We are returning zero deliberately. Let's re-write it a bit to be more clear. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
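The pattern being made explicit is roughly the following (a generic sketch of the intent, not the actual bridge hunk):

    /* Generic sketch: a NULL port here is a "nothing to do" case, so
     * returning 0 without setting an error code is deliberate. */
    p = br_port_get_rtnl(dev);
    if (!p)
            return 0;       /* not a bridge port: success, not an error */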
-
David S. Miller authored
Florian Fainelli says: ==================== net: phy and dsa random fixes/cleanups These two patches were already present as part of my attempt to make DSA modules work properly, these are the only two "valid" patches at this point which should not need any further rework. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
Factor the interrupt disabling into a function, bcm_sf2_intr_disable(), since we are doing the same thing in the setup and suspend paths. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
fixed_phy_set_link_update() contains an early check against a NULL callback pointer, which basically prevents us from removing any previously set callback. The users of the fp->link_update callback deal with a NULL callback just fine, so we really want to allow "removing" a link_update callback to avoid dangling callback pointers during e.g. module removal. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
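The change amounts to dropping that early check so a caller can clear its callback; a hedged sketch (the list walk that locates the fixed PHY is elided):

    /* Hedged sketch of the relaxed entry checks, not the full function. */
    int fixed_phy_set_link_update(struct phy_device *phydev,
                                  int (*link_update)(struct net_device *,
                                                     struct fixed_phy_status *))
    {
            if (!phydev || !phydev->bus)
                    return -EINVAL;

            /* Removed: "if (!link_update) return -EINVAL;" -- passing NULL now
             * clears a previously registered callback, so nothing dangles
             * across e.g. module removal. */

            /* ... walk the fixed-PHY list and set fp->link_update ... */
            return 0;
    }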
-
- 25 Jan, 2015 9 commits
-
-
Harout Hedeshian authored
The kernel forcefully applies MTU values received in router advertisements, provided the new MTU is lower than the current one. This behavior is undesirable when user space is managing the MTU. Instead, a sysctl flag 'accept_ra_mtu' is introduced so that user space can control whether or not RA-provided MTU updates should be applied. The default behavior is unchanged; user space must explicitly set this flag to 0 for RA MTUs to be ignored. Signed-off-by: Harout Hedeshian <harouth@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
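In the RA processing path the new flag simply gates the existing MTU update, roughly (a sketch; the surrounding ndisc code and bounds checks are elided), and user space keeps control by writing 0 to net.ipv6.conf.<iface>.accept_ra_mtu:

    /* Hedged sketch of the gate added in router-advertisement handling. */
    if (in6_dev->cnf.accept_ra_mtu && ndopts.nd_opts_mtu) {
            /* existing MTU sanity checks and route MTU update go here */
    }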
-
David S. Miller authored
Alexander Duyck says: ==================== Fixes and improvements for recent fib_trie updates While performing testing and prepping the next round of patches I found a few minor issues and improvements that could be made. These changes should help to reduce the overall code size and improve performance slightly, as I noticed a 20ns or so improvement in my worst-case testing, which will likely only result in a 1ns difference with a standard-sized trie. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
While doing further work on the fib_trie I noted a few items. First, I was using calls that were far more complicated than they needed to be for determining when to push/pull the suffix length. I have updated the code to reflect the simpler logic. The second issue is that I realised we weren't necessarily handling the case of a leaf_info struct surviving a flush. I have updated the logic so that we now call pull_suffix in the event of a leaf_info value being left in the leaf after flushing it. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
The function fib_find_alias is only accessed by functions in fib_trie.c, so it makes sense to relocate it and mark it static so that the compiler can take advantage of the optimizations it can apply to a local function. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
It doesn't make much sense to count the pointers ourselves when empty_children already has a count for the number of NULL pointers stored in the tnode. As such save ourselves the cycles and just use empty_children. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch really does two things. First it pulls the logic for determining if we should collapse one node out of the tree and the actual code doing the collapse into a separate pair of functions. This helps to make the changes to these areas more readable. Second it encodes the upper 32b of the empty_children value onto the full_children value in the case of bits == KEYLENGTH. By doing this we are able to handle the case of a 32b node where empty_children would appear to be 0 when it was actually 1ul << 32. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This change corrects an issue where if inflate or halve fails we were exiting the resize function without at least updating the slen for the node. To correct this I have moved the update of max_size into the while loop so that it is only decremented on a successful call to either inflate or halve. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch addresses two issues. The first issue is the fact that I believe I had the RCU freeing sequence slightly out of order. As a result we could get into an issue if a caller went into a child of a child of the new node, then backtraced into the to-be-freed parent, and then attempted to access a child of a child that may have been consumed in a resize of one of the new node's children. To resolve this I have moved the resize after we have freed the oldtnode. The only side effect of this is that we will now be calling resize on more nodes in the case of inflate due to the fact that we don't have a good way to test whether a full_tnode on the new node was there before or after the allocation. This should have minimal impact however, since the node should already be correctly sized, so it is just the cost of calling should_inflate that we will be taking on the node, which is only a couple of cycles. The second issue is the fact that inflate and halve were essentially doing the same thing after the new node was added to the trie replacing the old one. As such it wasn't really necessary to keep the code in both functions, so I have split it out into two other functions, called replace and update_children. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
In doing performance testing and analysis of the changes I recently found that by shifting the index I had created an unnecessary dependency. I have updated the code so that we instead shift a mask by bits and then just test against that as that should save us about 2 CPU cycles since we can generate the mask while the key and pos are being processed. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
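The idea in generic form (a sketch of the micro-optimization, not the fib_trie hunk itself):

    /* Generic sketch; names are illustrative. */
    static bool index_mismatch_shift(unsigned long index, unsigned int bits)
    {
            /* the comparison depends on the freshly shifted index */
            return (index >> bits) != 0;
    }

    static bool index_mismatch_mask(unsigned long index, unsigned int bits)
    {
            /* (~0ul << bits) depends only on 'bits', so the CPU can build the
             * mask while 'index' is still being derived from the key and pos */
            return (index & (~0ul << bits)) != 0;
    }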
-