- 04 Jun, 2016 23 commits
-
-
Andrew Lunn authored
The existing DSA binding has a number of limitations and problems. The main problem is that it cannot represent a switch as a Linux device, hanging off some bus. It is limited to one CPU port. The DSA platform device is artificial, and does not really represent hardware. Implement a new binding which can be embedded into any type of node on a bus to represent one switch device, and its links to other switches. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
Have the switch driver register its own MDIO bus. This allows for an mdio property in the device tree, with child nodes for phys, which can be referenced via phandles, etc. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
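A minimal sketch of that pattern, assuming a hypothetical switch driver (the foo_sw_* names, the bus name, and the read/write stubs are illustrative, not any driver's actual API): the driver allocates its own mii_bus, points it at an "mdio" child node, and registers it so the PHY subnodes become referenceable via phandles.

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/of_mdio.h>
#include <linux/phy.h>

struct foo_switch {
        struct device *dev;
        struct mii_bus *mdio_bus;
};

static int foo_sw_mdio_read(struct mii_bus *bus, int addr, int regnum)
{
        return 0xffff;  /* stub; a real driver would issue an SMI read here */
}

static int foo_sw_mdio_write(struct mii_bus *bus, int addr, int regnum, u16 val)
{
        return 0;       /* stub; a real driver would issue an SMI write here */
}

static int foo_sw_mdio_register(struct foo_switch *sw)
{
        struct device_node *mdio_np;
        int err;

        sw->mdio_bus = devm_mdiobus_alloc(sw->dev);
        if (!sw->mdio_bus)
                return -ENOMEM;

        sw->mdio_bus->name = "foo switch MDIO bus";
        sw->mdio_bus->read = foo_sw_mdio_read;
        sw->mdio_bus->write = foo_sw_mdio_write;
        sw->mdio_bus->parent = sw->dev;
        sw->mdio_bus->priv = sw;
        snprintf(sw->mdio_bus->id, MII_BUS_ID_SIZE, "%s", dev_name(sw->dev));

        /* The "mdio" child node holds the PHY subnodes that other device
         * tree nodes can reference by phandle. */
        mdio_np = of_get_child_by_name(sw->dev->of_node, "mdio");
        err = of_mdiobus_register(sw->mdio_bus, mdio_np);
        of_node_put(mdio_np);
        return err;
}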
-
Andrew Lunn authored
The switch implements a generic MDIO bus, which could host more than PHYs. It is conventional to use _mdio_ or _mii_ in the function name, so rename them. Also postfix the historically first read/write functions with _direct, to help distinguish them from _indirect and _ppu. While touching these functions, remove some of the _ prefixes, which we are deprecating. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
The switch may want to instantiate its own MDIO bus. Only do it centrally if the switch has not already created one, and the read op is implemented. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
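A hedged sketch of that fallback decision; the field and op names follow the DSA structures of this era (ds->slave_mii_bus, ds->drv->phy_read) but should be read as an illustration rather than the exact core code.

#include <linux/types.h>
#include <net/dsa.h>

static bool dsa_wants_central_mdio(struct dsa_switch *ds)
{
        /* Only create the shared bus when the driver did not register
         * its own and it actually implements PHY reads. */
        return !ds->slave_mii_bus && ds->drv->phy_read;
}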
-
Andrew Lunn authored
Replace the two switch statements with an array lookup, and store the result in the dsa tree structure. The drivers no longer need to know the selected tag protocol, so remove it from the dsa switch structure. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
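A hedged sketch of that lookup. DSA_TAG_PROTO_* and DSA_TAG_LAST come from net/dsa.h, and edsa_netdev_ops/trailer_netdev_ops are existing tagger ops tables; the table shape and error handling here are an assumption, not the exact core code.

#include <linux/err.h>
#include <net/dsa.h>
#include "dsa_priv.h"   /* struct dsa_device_ops and the *_netdev_ops tables */

static const struct dsa_device_ops *dsa_tag_ops_sketch[DSA_TAG_LAST] = {
#ifdef CONFIG_NET_DSA_TAG_EDSA
        [DSA_TAG_PROTO_EDSA] = &edsa_netdev_ops,
#endif
#ifdef CONFIG_NET_DSA_TAG_TRAILER
        [DSA_TAG_PROTO_TRAILER] = &trailer_netdev_ops,
#endif
};

static const struct dsa_device_ops *dsa_resolve_tag_ops(int tag_protocol)
{
        const struct dsa_device_ops *ops;

        if (tag_protocol < 0 || tag_protocol >= DSA_TAG_LAST)
                return ERR_PTR(-EINVAL);

        /* The caller stores the result in the tree (dst), as described. */
        ops = dsa_tag_ops_sketch[tag_protocol];
        return ops ? ops : ERR_PTR(-ENOPROTOOPT);
}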
-
Andrew Lunn authored
The merged driver no longer offers the option to use DSA tagging. So remove the code to set up the switch to do DSA tagging and hard code the use of EDSA. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
Refactor the code to set up a single DSA/CPU port into a function of its own, and export it, so it can be used by the new binding. Similarly, refactor the destroy code into a function. When destroying the ports, don't put the OF nodes; they should be released at the end along with those of the normal ports. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
The new binding will not have a chip data structure, it will place the routing directly into the switch structure. To enable backwards compatibility, copy the routing from the chip data into the switch structure. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
With a maximum of four switches, the size of the routing table is the same as the pointer to it. Removing it makes the code simpler. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
Move the port device node structure into the port structure, from the chip data. This information is needed in the next step of implementing the new binding. The chip data structure is used while parsing the whole old binding, before the individual switch structures exist. With the new bindings, this is reversed: the switches exist first, and the interconnections between the switches are derived from the individual switch bindings. Thus this chip data structure becomes unneeded. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
There are going to be more per-port members added to the switch structure. So add a port structure and move the netdev into it. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
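A hedged sketch combining this patch (a new per-port structure holding the netdev) with the patch above (the port's device_node joins it). The names mirror the descriptions; treat the exact layout as an assumption.

#include <linux/netdevice.h>
#include <linux/of.h>

#define SKETCH_MAX_PORTS 12     /* stand-in for DSA_MAX_PORTS */

struct dsa_port_sketch {
        struct net_device       *netdev;        /* moved here from the switch's port array */
        struct device_node      *dn;            /* moved here from the chip data */
};

struct dsa_switch_sketch {
        /* ... other struct dsa_switch members ... */
        struct dsa_port_sketch  ports[SKETCH_MAX_PORTS];
};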
-
Andrew Lunn authored
The platform data nr_chips is used when validating a received packet, to ensure it comes from a known switch chip. The number of possible switches is limited to DSA_MAX_SWITCHES, so use this as the first validation step. The new binding allows holes in the dst->ds[] array, so also ensure there is a valid dsa_switch for this packet. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
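A hedged sketch of that receive-side validation; dst->ds[] and DSA_MAX_SWITCHES are the real DSA tree fields, while the helper itself is illustrative.

#include <net/dsa.h>

static struct dsa_switch *dsa_rx_switch_sketch(struct dsa_switch_tree *dst,
                                               unsigned int source_device)
{
        /* First bound the index, then cope with holes in ds[]. */
        if (source_device >= DSA_MAX_SWITCHES)
                return NULL;

        return dst->ds[source_device];  /* may be NULL; caller drops the packet */
}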
-
Andrew Lunn authored
The DSA layer should no longer assume the switch is connected to an MDIO bus. As a result, we cannot use the address on the MDIO bus when forming the name of the switch's internal MDIO bus for its builtin and possibly external PHYs. The switch index is sufficient to make the name unique, so drop the MDIO address. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
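A hedged sketch of the naming change; the format string is illustrative, the point being that ds->index alone keeps the bus id unique without any MDIO address in it.

#include <linux/kernel.h>
#include <linux/phy.h>
#include <net/dsa.h>

static void dsa_name_slave_mii_bus_sketch(struct dsa_switch *ds)
{
        snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "dsa-%d", ds->index);
}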
-
Vivien Didelot authored
Lock debugging shows that there is a possible circular lock in the PPU work code. Switch the lock order of smi_mutex and ppu_mutex to fix this. Here's the full trace: [ 4.341325] ====================================================== [ 4.347519] [ INFO: possible circular locking dependency detected ] [ 4.353800] 4.6.0 #4 Not tainted [ 4.357039] ------------------------------------------------------- [ 4.363315] kworker/0:1/328 is trying to acquire lock: [ 4.368463] (&ps->smi_mutex){+.+.+.}, at: [<8049c758>] mv88e6xxx_reg_read+0x30/0x54 [ 4.376313] [ 4.376313] but task is already holding lock: [ 4.382160] (&ps->ppu_mutex){+.+...}, at: [<8049cac0>] mv88e6xxx_ppu_reenable_work+0x28/0xd4 [ 4.390772] [ 4.390772] which lock already depends on the new lock. [ 4.390772] [ 4.398963] [ 4.398963] the existing dependency chain (in reverse order) is: [ 4.406461] [ 4.406461] -> #1 (&ps->ppu_mutex){+.+...}: [ 4.410897] [<806d86bc>] mutex_lock_nested+0x54/0x360 [ 4.416606] [<8049a800>] mv88e6xxx_ppu_access_get+0x28/0x100 [ 4.422906] [<8049b778>] mv88e6xxx_phy_read+0x90/0xdc [ 4.428599] [<806a4534>] dsa_slave_phy_read+0x3c/0x40 [ 4.434300] [<804943ec>] mdiobus_read+0x68/0x80 [ 4.439481] [<804939d4>] get_phy_device+0x58/0x1d8 [ 4.444914] [<80493ed0>] mdiobus_scan+0x24/0xf4 [ 4.450078] [<8049409c>] __mdiobus_register+0xfc/0x1ac [ 4.455857] [<806a40b0>] dsa_probe+0x860/0xca8 [ 4.460934] [<8043246c>] platform_drv_probe+0x5c/0xc0 [ 4.466627] [<804305a0>] driver_probe_device+0x118/0x450 [ 4.472589] [<80430b00>] __device_attach_driver+0xac/0x128 [ 4.478724] [<8042e350>] bus_for_each_drv+0x74/0xa8 [ 4.484235] [<804302d8>] __device_attach+0xc4/0x154 [ 4.489755] [<80430cec>] device_initial_probe+0x1c/0x20 [ 4.495612] [<8042f620>] bus_probe_device+0x98/0xa0 [ 4.501123] [<8042fbd0>] deferred_probe_work_func+0x4c/0xd4 [ 4.507328] [<8013a794>] process_one_work+0x1a8/0x604 [ 4.513030] [<8013ac54>] worker_thread+0x64/0x528 [ 4.518367] [<801409e8>] kthread+0xec/0x100 [ 4.523201] [<80108f30>] ret_from_fork+0x14/0x24 [ 4.528462] [ 4.528462] -> #0 (&ps->smi_mutex){+.+.+.}: [ 4.532895] [<8015ad5c>] lock_acquire+0xb4/0x1dc [ 4.538154] [<806d86bc>] mutex_lock_nested+0x54/0x360 [ 4.543856] [<8049c758>] mv88e6xxx_reg_read+0x30/0x54 [ 4.549549] [<8049cad8>] mv88e6xxx_ppu_reenable_work+0x40/0xd4 [ 4.556022] [<8013a794>] process_one_work+0x1a8/0x604 [ 4.561707] [<8013ac54>] worker_thread+0x64/0x528 [ 4.567053] [<801409e8>] kthread+0xec/0x100 [ 4.571878] [<80108f30>] ret_from_fork+0x14/0x24 [ 4.577139] [ 4.577139] other info that might help us debug this: [ 4.577139] [ 4.585159] Possible unsafe locking scenario: [ 4.585159] [ 4.591093] CPU0 CPU1 [ 4.595631] ---- ---- [ 4.600169] lock(&ps->ppu_mutex); [ 4.603693] lock(&ps->smi_mutex); [ 4.609742] lock(&ps->ppu_mutex); [ 4.615790] lock(&ps->smi_mutex); [ 4.619314] [ 4.619314] *** DEADLOCK *** [ 4.619314] [ 4.625256] 3 locks held by kworker/0:1/328: [ 4.629537] #0: ("events"){.+.+..}, at: [<8013a704>] process_one_work+0x118/0x604 [ 4.637288] #1: ((&ps->ppu_work)){+.+...}, at: [<8013a704>] process_one_work+0x118/0x604 [ 4.645653] #2: (&ps->ppu_mutex){+.+...}, at: [<8049cac0>] mv88e6xxx_ppu_reenable_work+0x28/0xd4 [ 4.654714] [ 4.654714] stack backtrace: [ 4.659098] CPU: 0 PID: 328 Comm: kworker/0:1 Not tainted 4.6.0 #4 [ 4.665286] Hardware name: Freescale Vybrid VF5xx/VF6xx (Device Tree) [ 4.671748] Workqueue: events mv88e6xxx_ppu_reenable_work [ 4.677174] Backtrace: [ 4.679674] [<8010d354>] (dump_backtrace) from [<8010d5a0>] (show_stack+0x20/0x24) [ 4.687252] r6:80fb3c88 r5:80fb3c88 
r4:80fb4728 r3:00000002 [ 4.693003] [<8010d580>] (show_stack) from [<803b45e8>] (dump_stack+0x24/0x28) [ 4.700246] [<803b45c4>] (dump_stack) from [<80157398>] (print_circular_bug+0x208/0x32c) [ 4.708361] [<80157190>] (print_circular_bug) from [<8015a630>] (__lock_acquire+0x185c/0x1b80) [ 4.716982] r10:9ec22a00 r9:00000060 r8:8164b6bc r7:00000040 r6:00000003 r5:8163a5b4 [ 4.724905] r4:00000003 r3:9ec22de8 [ 4.728537] [<80158dd4>] (__lock_acquire) from [<8015ad5c>] (lock_acquire+0xb4/0x1dc) [ 4.736378] r10:60000013 r9:00000000 r8:00000000 r7:00000000 r6:9e5e9c50 r5:80e618e0 [ 4.744301] r4:00000000 [ 4.746879] [<8015aca8>] (lock_acquire) from [<806d86bc>] (mutex_lock_nested+0x54/0x360) [ 4.754976] r10:9e5e9c1c r9:80e616c4 r8:9f685ea0 r7:0000001b r6:9ec22a00 r5:8163a5b4 [ 4.762899] r4:9e5e9c1c [ 4.765477] [<806d8668>] (mutex_lock_nested) from [<8049c758>] (mv88e6xxx_reg_read+0x30/0x54) [ 4.774008] r10:80e60c5b r9:80e616c4 r8:9f685ea0 r7:0000001b r6:00000004 r5:9e5e9c10 [ 4.781930] r4:9e5e9c1c [ 4.784507] [<8049c728>] (mv88e6xxx_reg_read) from [<8049cad8>] (mv88e6xxx_ppu_reenable_work+0x40/0xd4) [ 4.793907] r7:9ffd5400 r6:9e5e9c68 r5:9e5e9cb0 r4:9e5e9c10 [ 4.799659] [<8049ca98>] (mv88e6xxx_ppu_reenable_work) from [<8013a794>] (process_one_work+0x1a8/0x604) [ 4.809059] r9:80e616c4 r8:9f685ea0 r7:9ffd5400 r6:80e0a1c8 r5:9f5f2e80 r4:9e5e9cb0 [ 4.816910] [<8013a5ec>] (process_one_work) from [<8013ac54>] (worker_thread+0x64/0x528) [ 4.825010] r10:9f5f2e80 r9:00000008 r8:80e0dc80 r7:80e0a1fc r6:80e0a1c8 r5:9f5f2e98 [ 4.832933] r4:80e0a1c8 [ 4.835510] [<8013abf0>] (worker_thread) from [<801409e8>] (kthread+0xec/0x100) [ 4.842827] r10:00000000 r9:00000000 r8:00000000 r7:8013abf0 r6:9f5f2e80 r5:9ec15740 [ 4.850749] r4:00000000 [ 4.853327] [<801408fc>] (kthread) from [<80108f30>] (ret_from_fork+0x14/0x24) [ 4.860557] r7:00000000 r6:00000000 r5:801408fc r4:9ec15740 Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
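A hedged illustration of the AB-BA pattern the trace shows; function and structure names are stand-ins for the driver's real code, and the closing comment only restates the fix the message describes (a single consistent order), not which order the driver settled on.

#include <linux/mutex.h>

struct ps_sketch {
        struct mutex smi_mutex;
        struct mutex ppu_mutex;
};

static void path_a(struct ps_sketch *ps)        /* e.g. the PHY access path */
{
        mutex_lock(&ps->ppu_mutex);
        mutex_lock(&ps->smi_mutex);             /* order: ppu -> smi */
        mutex_unlock(&ps->smi_mutex);
        mutex_unlock(&ps->ppu_mutex);
}

static void path_b(struct ps_sketch *ps)        /* e.g. the PPU re-enable work */
{
        mutex_lock(&ps->smi_mutex);
        mutex_lock(&ps->ppu_mutex);             /* order: smi -> ppu; opposite order, AB-BA deadlock */
        mutex_unlock(&ps->ppu_mutex);
        mutex_unlock(&ps->smi_mutex);
}

/* The fix is to make every path take the two mutexes in the same order. */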
-
Andrew Lunn authored
The new binding does not make use of dsa_chip_data, a.k.a cd. When retrieving the size of the EEPROM attached to a switch, don't assume there is a cd attached to the switch structure. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
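A hedged sketch of treating the chip data as optional when reporting EEPROM length; ds->cd and eeprom_len are the existing fields, while the helper and its fallback value are assumptions.

#include <net/dsa.h>

static int dsa_eeprom_len_sketch(struct dsa_switch *ds)
{
        if (ds->cd)
                return ds->cd->eeprom_len;

        return 0;       /* new binding: no chip data, no platform-declared EEPROM size */
}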
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
size_t objects should be printed with %Z printf format. Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
Commit a91eb52a ("qed: Revisit chain implementation") contains an incorrect implementation for BE platforms, as the device's regpairs containing addresses are LE and they're not converted correctly when read back. In addition, it raises a compilation warning for 32-bit platforms where dma_addr_t is a 32-bit variable. Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
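A hedged illustration of the endianness point: addresses read back from the device come as little-endian register pairs, so they must be byte-swapped on big-endian hosts. The struct and helper are illustrative, not qed's actual chain code.

#include <linux/kernel.h>
#include <linux/types.h>

struct le_regpair_sketch {
        __le32 lo;
        __le32 hi;
};

static u64 le_regpair_to_cpu_sketch(const struct le_regpair_sketch *rp)
{
        /* Converting each half explicitly is what was missing on BE;
         * assigning the 64-bit result to a 32-bit dma_addr_t is where
         * the 32-bit build warning came from. */
        return ((u64)le32_to_cpu(rp->hi) << 32) | le32_to_cpu(rp->lo);
}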
-
David S. Miller authored
Yuval Mintz says: ==================== qed: RoCE & iSCSI infrastructure We plan on sending 2 new protocol drivers in the imminent future - both our RoCE [qedr] and iSCSI [qedi] drivers. As both submissions would be rather massive and in order to avoid collisions between them, the common infrastructure on the qed side was prepared as an independent patch series to be sent ahead of those 2 submissions. This patch series introduces 2 new 'ids' in QED - one for iSCSI and one for RoCE. It then adds the logic required for configuring said protocols in HW. Notice it *doesn't* actually add any client using said ids, but rather only the infrastructure to allow their later usage. What this patch series doesn't contain is the slowpath protocol-configuration toward the firmware. I.e., it contains register-setting logic, memory allocations, etc., but not actual flow-related configuration specific to the protocol. Those would be sent as part of the protocol driver submissions. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
RoCE and iSCSI would require some added/changed HW configuration in order to run properly. The biggest single change is the requirement of allocating and mapping host memory for several HW blocks that aren't being used by qede [SRC, QM, TM, etc.]. In addition, whereas qede is only using context memory for HW blocks, the new protocols would also require task memories to be added. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
This patch adds 2 new personalities to the ecore, in addition to QED_PCI_ETH: QED_PCI_ISCSI and QED_PCI_ETH_ROCE. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
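A hedged sketch of what the personality list looks like after this patch; the real enum in qed may carry more values and a different ordering.

enum qed_pci_personality_sketch {
        QED_PCI_ETH,
        QED_PCI_ISCSI,
        QED_PCI_ETH_ROCE,
};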
-
Yuval Mintz authored
This adds the qed portion of the RoCE & iSCSI firmware HSI, as well as adding several new common HSI files which would be required by both qed and qed* protocols. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuval Mintz authored
RoCE driver is going to need a 32-bit chain [current chain implementation for qed* currently supports only 16-bit producer/consumer chains]. This patch adds said support, as well as doing other slight tweaks and modifications to qed's chain API. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
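A hedged sketch of the 16- vs 32-bit distinction: the producer/consumer indices get a selectable width, chosen when the chain is allocated. Names are illustrative, not qed's actual chain API.

#include <linux/types.h>

enum chain_cnt_type_sketch {
        CHAIN_CNT_TYPE_U16,
        CHAIN_CNT_TYPE_U32,
};

struct chain_index_sketch {
        enum chain_cnt_type_sketch type;
        union {
                u16 idx16;
                u32 idx32;
        };
};

static u32 chain_index_value(const struct chain_index_sketch *idx)
{
        return idx->type == CHAIN_CNT_TYPE_U16 ? idx->idx16 : idx->idx32;
}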
-
- 03 Jun, 2016 13 commits
-
-
Ivan Khoronzhuk authored
There is no reason for this lock, at least for now. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use the more common kernel logging style and reduce object size. The logging message prefix changes from a mixture of "RxRPC:" and "RXRPC:" to "af_rxrpc: ".
$ size net/rxrpc/built-in.o*
   text    data     bss     dec     hex filename
  64172    1972    8304   74448   122d0 net/rxrpc/built-in.o.new
  67512    1972    8304   77788   12fdc net/rxrpc/built-in.o.old
Miscellanea: o Consolidate the ASSERT macros to use a single pr_err call with decimal and hexadecimal output and a stringified #OP argument. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
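A hedged sketch of the two pieces described above: a pr_fmt definition giving every pr_*() call the common "af_rxrpc: " prefix, and an assertion macro consolidated into one pr_err with decimal, hex and the stringified operator. The macro name and exact message text are illustrative.

#define pr_fmt(fmt) "af_rxrpc: " fmt   /* must precede the printk includes */

#include <linux/bug.h>
#include <linux/kernel.h>
#include <linux/printk.h>

#define ASSERTCMP_SKETCH(X, OP, Y)                                      \
do {                                                                    \
        unsigned long _x = (unsigned long)(X);                          \
        unsigned long _y = (unsigned long)(Y);                          \
        if (unlikely(!(_x OP _y))) {                                    \
                pr_err("Assertion failed - %lu(0x%lx) %s %lu(0x%lx) is false\n", \
                       _x, _x, #OP, _y, _y);                            \
                BUG();                                                  \
        }                                                               \
} while (0)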
-
Haiyang Zhang authored
Added a condition to avoid vlan devices with same MAC registering as VF. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Marcelo Ricardo Leitner says: ==================== sctp: Add GSO support This patchset adds sctp GSO support. Performance tests indicate that it increases throughput by 10% if using bigger chunk sizes, especially if bigger than the MTU. For small chunks, it doesn't help much if not using heavy firewall rules. For small chunks it will probably be of more use once we get something like MSG_MORE as David Laight had suggested. overall changes: v1->v2: Added support for receiving GSO frames on SCTP stack, as requested by Dave Miller. v2->v3: Consider sctphdr size in skb_gso_transport_seglen() rebased due to 5c7cdf33 ("gso: Remove arbitrary checks for unsupported GSO") ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
This is useful for debugging packet sizes. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
SCTP has this peculiarity that its packets cannot be just segmented to (P)MTU. Its chunks must be contained in IP segments, padding respected. So we can't just generate a big skb, set gso_size to the fragmentation point and deliver it to the IP layer. This patch takes a different approach. SCTP will now build a skb as it would be if it was received using GRO. That is, there will be a cover skb with protocol headers and children ones containing the actual segments, already segmented in a way that respects SCTP RFCs. With that, we can tell skb_segment() to just split based on frag_list, trusting its sizes are already in accordance. This way SCTP can benefit from GSO and instead of passing several packets through the stack, it can pass a single large packet. v2: - Added support for receiving GSO frames, as requested by Dave Miller. - Clear skb->cb if packet is GSO (otherwise it's not used by SCTP) - Added heuristics similar to what we have in TCP for not generating single GSO packets that fill cwnd. v3: - consider sctphdr size in skb_gso_transport_seglen() - rebased due to 5c7cdf33 ("gso: Remove arbitrary checks for unsupported GSO") Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
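A hedged sketch of the assembly described here, assuming the child skbs (chunks[]) were already built to respect PMTU and chunk padding; the helper name is illustrative. GSO_BY_FRAGS and SKB_GSO_SCTP are the markers this series relies on so skb_segment() splits exactly at the frag_list boundaries.

#include <linux/skbuff.h>

static void sctp_gso_assemble_sketch(struct sk_buff *head,
                                     struct sk_buff **chunks, int n)
{
        struct sk_buff **tail = &skb_shinfo(head)->frag_list;
        int i;

        for (i = 0; i < n; i++) {
                *tail = chunks[i];
                tail = &chunks[i]->next;

                /* Keep the cover skb's accounting consistent with its children. */
                head->len += chunks[i]->len;
                head->data_len += chunks[i]->len;
                head->truesize += chunks[i]->truesize;
        }

        skb_shinfo(head)->gso_size = GSO_BY_FRAGS;
        skb_shinfo(head)->gso_segs = n;
        skb_shinfo(head)->gso_type = SKB_GSO_SCTP;
}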
-
Marcelo Ricardo Leitner authored
This patch is a preparation for the GSO one. In order to successfully handle GSO packets on rx path we must not call skb_linearize, otherwise it defeats any gain GSO may have had. This patch thus delays as much as possible the call to skb_linearize, leaving it to sctp_inq_pop() moment. For that the sanity checks performed now know how to deal with fragments. One positive side-effect of this is that if the socket is backlogged it will have the chance of doing it on backlog processing instead of during softirq. With this move, it's evident that a check for non-linearity in sctp_inq_pop was ineffective and is now removed. Note that a similar check is performed a bit below this one. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
skb_gso_network_seglen is not enough for checking fragment sizes if skb is using GSO_BY_FRAGS, as we have to check frag per frag. This patch introduces skb_gso_validate_mtu, based on the former, which will wrap the use case inside it, as all calls to skb_gso_network_seglen were to validate if it fits on a given MTU, and improve the check. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
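A hedged sketch of the check this helper wraps; the header accounting is simplified (see the code comment), so read it as the shape of the logic rather than the exact implementation.

#include <linux/skbuff.h>

static bool gso_validate_mtu_sketch(const struct sk_buff *skb,
                                    unsigned int mtu)
{
        const struct sk_buff *iter;

        if (skb_shinfo(skb)->gso_size != GSO_BY_FRAGS)
                return skb_gso_network_seglen(skb) <= mtu;

        /* GSO_BY_FRAGS: every frag_list child is its own segment. */
        skb_walk_frags(skb, iter) {
                /* Assumption: the real helper also accounts for the
                 * protocol header lengths; only the child payload is
                 * checked here. */
                if (skb_headlen(iter) > mtu)
                        return false;
        }

        return true;
}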
-
Marcelo Ricardo Leitner authored
This patch allows segmenting a skb based on its frag sizes instead of based on a fixed value. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
sctp GSO requires it and sctp can be compiled as a module, so we need to export this function. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marcelo Ricardo Leitner authored
NETIF_F_GSO_SOFTWARE was defined to list all GSO software types, so let's make use of it in the loopback code. Note that veth/vxlan/others already use it. Within this patch series, this patch causes lo to pick up the SCTP GSO feature automatically (as it's added to NETIF_F_GSO_SOFTWARE) and thus avoid segmentation if possible. Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Tested-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bhaktipriya Shridhar authored
alloc_workqueue replaces the deprecated create_workqueue(). The workqueue adapter->txrx_wq has the workitem &adapter->raise_intr_rxdata_task per adapter. The Extended Socket Network Device is shared-memory based, so one side's transmission denotes the other side's reception. raise_intr_rxdata_task raises an interrupt from the sender to the receivers in order to notify them. The workqueue adapter->control_wq has the workitem &adapter->interrupt_watch_task per adapter. interrupt_watch_task is used to prevent delay of interrupts. Dedicated workqueues have been used in both cases since the workitems on the workqueues are involved in normal device operation and require forward progress under memory pressure. max_active has been set to 0 since there is no need for throttling the number of active work items. Since network devices may be used for memory reclaim, WQ_MEM_RECLAIM has been set to guarantee forward progress. Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
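A hedged sketch of the conversion pattern used in these workqueue patches; the queue name and the surrounding driver context are illustrative.

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *txrx_wq_sketch;

static int sketch_create_queue(void)
{
        /* WQ_MEM_RECLAIM guarantees forward progress under memory
         * pressure; a max_active of 0 means the default limit, i.e.
         * no additional throttling. */
        txrx_wq_sketch = alloc_workqueue("fjes_txrx", WQ_MEM_RECLAIM, 0);
        if (!txrx_wq_sketch)
                return -ENOMEM;

        return 0;
}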
-
Yuval Mintz authored
The new QED firmware contains several fixes, including: - Wrong classification of packets in 4-port devices. - Anti-spoof interoperability with encapsulated packets. - Tx-switching of encapsulated packets. It also slightly improves Tx performance of the device. In addition, this firmware contains the necessary logic for supporting iSCSI & RDMA, for which we plan on pushing protocol drivers in the imminent future. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 Jun, 2016 4 commits
-
-
David Ahern authored
The VRF device exists to define L3 domains and guide FIB lookups. As such its operstate is not relevant. Seeing 'state UNKNOWN' in the output of 'ip link show' can be confusing, so set operstate at link create. Similarly, the MTU for a VRF device is not used; any fragmentation of the payload is done on the output path based on the real egress device. An MTU of 1500 on the VRF device while enslaved devices have a higher MTU can lead to confusion. Since the VRF MTU is not relevant, set it to 64k, similar to what is done for loopback. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
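A hedged sketch of the two setup-time assignments this describes, mirroring the text (operstate pinned at create time, MTU raised to 64k like loopback) rather than the exact driver code.

#include <linux/netdevice.h>

static void vrf_setup_sketch(struct net_device *dev)
{
        dev->operstate = IF_OPER_UP;    /* operstate is meaningless for an L3 master device */
        dev->mtu = 64 * 1024;           /* fragmentation happens on the real egress device */
}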
-
Zhang Shengju authored
Set name_assign_type of internal port to NET_NAME_USER. Signed-off-by: Zhang Shengju <zhangshengju@cmss.chinamobile.com> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bhaktipriya Shridhar authored
alloc_workqueue replaces deprecated create_workqueue(). A dedicated workqueue has been used since the workitems are involved in normal device operation. Workitems &priv->rx_work and &priv->tx_work, map to w5100_rx_work and w5100_tx_work respectively and are involved in receiving and transmitting packets. Forward progress under memory pressure is a requirement here. create_workqueue has been replaced with alloc_workqueue with max_active as 0 since there is no need for throttling the number of active work items. Since the driver may be used in memory reclaim path, WQ_MEM_RECLAIM has been set to guarantee forward progress. flush_workqueue is unnecessary since destroy_workqueue() itself calls drain_workqueue() which flushes repeatedly till the workqueue becomes empty. Hence the call to flush_workqueue() has been dropped. Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Masaru Nagai authored
Writing a non-zero value to the manual PAUSE frame register (MPR) starts the transmission of a PAUSE frame. A PAUSE frame is sent in ravb_emac_init(), but it is not expected behavior. Signed-off-by: Masaru Nagai <masaru.nagai.vx@renesas.com> Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com> Signed-off-by: Yoshihiro Kaneko <ykaneko0929@gmail.com> Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-