- 29 Nov, 2011 40 commits
-
Eric Dumazet authored
Instead of using a custom flow dissector, use skb_flow_dissect() and benefit from tunnelling support. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Instead of using a custom flow dissector, use skb_flow_dissect() and benefit from tunnelling support. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
tcp_sendmsg() uses the select_size() helper to choose the skb head size when a new skb must be allocated. If GSO is enabled for the socket, the current strategy is to force all payload data outside of the headroom, into page fragments. This strategy wastes memory for small packets. Experiments show that the best results are obtained when using 2048 bytes for the skb head (this includes the skb overhead and various headers). This patch provides better len/truesize ratios for packets sent to the loopback device, and reduces memory needs for in-flight loopback packets, particularly on arches with big pages. If a sender sends many 1-byte packets to an unresponsive application, the receiver's rmem_alloc will grow faster, so it will stop queuing these packets sooner or will collapse its receive queue to free excess memory. netperf -t TCP_RR results are improved by ~4%, and many workloads are improved as well (tbench, mysql...). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
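A minimal sketch of the head-size choice described above, assuming the constants named in the message (a 2048-byte head, SKB_WITH_OVERHEAD, MAX_TCP_HEADER); the helper name below is illustrative, not the exact patch:

#include <net/tcp.h>

/* Sketch only: pick the linear head size for a new skb in tcp_sendmsg().
 * With GSO enabled, keep headers plus a small amount of payload linear
 * and let the remaining payload go into page fragments. */
static int select_head_size_sketch(const struct sock *sk, bool gso)
{
	int size = tcp_sk(sk)->mss_cache;	/* non-GSO: one MSS in the head */

	if (gso)
		size = SKB_WITH_OVERHEAD(2048 - MAX_TCP_HEADER);

	return size;
}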
-
Eric Dumazet authored
On Monday, 28 November 2011 at 19:06 -0500, David Miller wrote: > From: Dimitris Michailidis <dm@chelsio.com> > Date: Mon, 28 Nov 2011 08:25:39 -0800 > > >> +bool skb_flow_dissect(const struct sk_buff *skb, struct flow_keys > >> *flow) > >> +{ > >> + int poff, nhoff = skb_network_offset(skb); > >> + u8 ip_proto; > >> + u16 proto = skb->protocol; > > > > __be16 instead of u16 for proto? > > I'll take care of this when I apply these patches. (CC trimmed) Thanks David! Here is a small patch to use one 64-bit load/store on x86_64 instead of two 32-bit load/stores. [PATCH net-next] flow_dissector: use a 64bit load/store The gcc compiler is smart enough to use a single load/store if we memcpy(dptr, sptr, 8) on x86_64, regardless of CONFIG_CC_OPTIMIZE_FOR_SIZE. In the IP header, daddr immediately follows saddr; this won't change in the future. We only need to make sure our flow_keys (src, dst) fields won't break that rule. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
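A user-space sketch of the trick: with src and dst kept adjacent in flow_keys, mirroring saddr/daddr in the IP header, an 8-byte memcpy lets the compiler emit a single 64-bit move on x86_64. The struct names below are simplified stand-ins, not the kernel definitions:

#include <stdint.h>
#include <string.h>

/* Simplified models of the structures involved; the only property that
 * matters is the adjacency assumed by the commit message. */
struct iphdr_part {
	uint32_t saddr;
	uint32_t daddr;		/* immediately follows saddr in the IP header */
};

struct flow_keys_sketch {
	uint32_t src;
	uint32_t dst;		/* must stay adjacent to src for the 8-byte copy */
};

static void copy_addrs(struct flow_keys_sketch *flow,
		       const struct iphdr_part *iph)
{
	/* one memcpy of 8 bytes: the compiler turns this into a single
	 * 64-bit load/store on x86_64 instead of two 32-bit ones */
	memcpy(&flow->src, &iph->saddr,
	       sizeof(iph->saddr) + sizeof(iph->daddr));
}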
-
Tom Herbert authored
Changes to sfc to use byte queue limits. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Changes to bnx2x to use byte queue limits. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Changes to tg3 to use byte queue limits. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Changes to forcedeth to use byte queue limits. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Changes to e1000e to use byte queue limits. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Networking stack support for byte queue limits, using the dynamic queue limits library. Byte queue limits are maintained per transmit queue, and a dql structure has been added to the netdev_queue structure for this purpose. Configuration of bql is in the queue's tx-<n> sysfs directory, under the byte_queue_limits directory. Configuration includes: limit_min (bql minimum limit), limit_max (bql maximum limit), and hold_time (bql slack hold time). Also under the directory are: limit (current byte limit) and inflight (current number of bytes on the queue). Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
This patch moves the xps-specific parts of netdev_queue_release into their own function, which netdev_queue_release can call. This allows netdev_queue_release to be more generic (for adding new attributes to tx queues). Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Add interfaces for drivers to call to record the number of packets and bytes at send time and at transmit completion. Also add a function to "reset" a queue. These will be used by byte queue limits. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
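A hedged sketch of how a driver's transmit and completion paths might call these hooks; the my_driver_* wrappers are hypothetical, only the netdev_tx_*_queue() calls come from this series:

#include <linux/netdevice.h>

/* Sketch of driver-side accounting (hypothetical driver). The txq pointer
 * would typically come from netdev_get_tx_queue(dev, queue_index). */
static void my_driver_xmit_account(struct netdev_queue *txq,
				   unsigned int bytes)
{
	/* from ndo_start_xmit, after the skb is posted to hardware */
	netdev_tx_sent_queue(txq, bytes);
}

static void my_driver_tx_complete(struct netdev_queue *txq,
				  unsigned int pkts, unsigned int bytes)
{
	/* from the TX completion/clean path, with totals reaped this
	 * interval; the queue limit is recalculated here */
	netdev_tx_completed_queue(txq, pkts, bytes);
}

static void my_driver_ring_reset(struct netdev_queue *txq)
{
	/* when the ring is reinitialised, reset the queue accounting too */
	netdev_tx_reset_queue(txq);
}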
-
Tom Herbert authored
Create separate queue state flags so that either the stack or drivers can turn on XOFF. Add a set of functions used in the stack to determine if a queue is really stopped (either by the stack or the driver). Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
Implementation of dynamic queue limits (dql). This is a library which allows a queue limit to be dynamically managed. The goal of dql is to set the queue limit (the number of objects that may be queued) to the minimum that avoids starving the queue. dql would be used with a queue which has these properties: 1) Objects are queued up to some limit which can be expressed as a count of objects. 2) Periodically a completion process executes which retires consumed objects. 3) Starvation occurs when the limit has been reached and all queued data has actually been consumed, but completion processing has not yet run, so queuing new data is blocked. 4) Minimizing the amount of queued data is desirable. A canonical example of such a queue would be a NIC HW transmit queue. The queue limit is dynamic; it will increase or decrease over time depending on the workload. The queue limit is recalculated each time completion processing is done. Increases occur when the queue is starved and can grow exponentially over successive intervals. Decreases occur when more data is being maintained in the queue than is needed to prevent starvation. The number of extra objects, or "slack", is measured over successive intervals, and to avoid hysteresis the limit is only reduced by the minimum slack seen over a configurable time period. The dql API provides routines to manage the queue: dql_init is called to initialize the dql structure, dql_reset is called to reset dynamic values, dql_queued is called when objects are being enqueued, dql_avail returns the availability in the queue, and dql_completed is called when objects have been consumed in the queue. Configuration consists of: max_limit (maximum limit), min_limit (minimum limit), and slack_hold_time (time to measure instances of slack before reducing the queue limit). Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
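A hedged sketch of the usage pattern on a hypothetical hardware queue, exercising the routines listed above; the my_* structure and functions are illustrative, and the hold time passed to dql_init() is an assumption:

#include <linux/dynamic_queue_limits.h>
#include <linux/jiffies.h>

/* Illustrative wrapper around a HW queue; ring details and locking omitted. */
struct my_hw_queue {
	struct dql dql;
	/* ... descriptors, producer/consumer indices, etc. ... */
};

static void my_queue_setup(struct my_hw_queue *q)
{
	dql_init(&q->dql, HZ);		/* assumed: measure slack over ~1s */
}

static bool my_queue_post(struct my_hw_queue *q, unsigned int count)
{
	if (dql_avail(&q->dql) < 0)
		return false;		/* limit reached: stop queuing */
	dql_queued(&q->dql, count);	/* account objects handed to HW */
	return true;
}

static void my_queue_complete(struct my_hw_queue *q, unsigned int count)
{
	/* retire consumed objects and let dql recompute the dynamic limit */
	dql_completed(&q->dql, count);
}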
-
Rob Herring authored
Add support for the XGMAC 10Gb ethernet device in the Calxeda Highbank SOC. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rick Jones authored
Round up some wayward "N/A" fw_version dust bunnies as part of that clean-up. Signed-off-by: Rick Jones <rick.jones2@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Neal Cardwell authored
Since 2005 (c1b4a7e6) tcp_tso_should_defer has been using tcp_max_burst() as a target limit for deciding how large to make outgoing TSO packets when not using sysctl_tcp_tso_win_divisor. But since 2008 (dd9e0dda) tcp_max_burst() returns the reordering degree. We should not have tcp_tso_should_defer attempt to build larger segments just because there is more reordering. This commit splits the notion of deferral size used in TSO from the notion of burst size used in cwnd moderation, and returns the TSO deferral limit to its original value. Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pascal Hambourg authored
Use memcmp() instead of cast to u16 when checking the PAD field. Signed-off-by: Pascal Hambourg <pascal@plouf.fr.eu.org> Signed-off-by: chas williams - CONTRACTOR <chas@cmf.nrl.navy.mil> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pascal Hambourg authored
Routed payload requires less headroom than bridged payload. So do not reallocate headroom if not needed. Also, add worst case AAL5 overhead to netdev->hard_header_len. Signed-off-by: Pascal Hambourg <pascal@plouf.fr.eu.org> Signed-off-by: chas williams - CONTRACTOR <chas@cmf.nrl.navy.mil> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We can test/set multiple bits from sk_flags at once, to shorten the socket setup/dismantle phase a bit. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Igor Maravic reported an error caused by jump_label_dec() being called from IRQ context : BUG: sleeping function called from invalid context at kernel/mutex.c:271 in_atomic(): 1, irqs_disabled(): 0, pid: 0, name: swapper 1 lock held by swapper/0: #0: (&n->timer){+.-...}, at: [<ffffffff8107ce90>] call_timer_fn+0x0/0x340 Pid: 0, comm: swapper Not tainted 3.2.0-rc2-net-next-mpls+ #1 Call Trace: <IRQ> [<ffffffff8104f417>] __might_sleep+0x137/0x1f0 [<ffffffff816b9a2f>] mutex_lock_nested+0x2f/0x370 [<ffffffff810a89fd>] ? trace_hardirqs_off+0xd/0x10 [<ffffffff8109a37f>] ? local_clock+0x6f/0x80 [<ffffffff810a90a5>] ? lock_release_holdtime.part.22+0x15/0x1a0 [<ffffffff81557929>] ? sock_def_write_space+0x59/0x160 [<ffffffff815e936e>] ? arp_error_report+0x3e/0x90 [<ffffffff810969cd>] atomic_dec_and_mutex_lock+0x5d/0x80 [<ffffffff8112fc1d>] jump_label_dec+0x1d/0x50 [<ffffffff81566525>] net_disable_timestamp+0x15/0x20 [<ffffffff81557a75>] sock_disable_timestamp+0x45/0x50 [<ffffffff81557b00>] __sk_free+0x80/0x200 [<ffffffff815578d0>] ? sk_send_sigurg+0x70/0x70 [<ffffffff815e936e>] ? arp_error_report+0x3e/0x90 [<ffffffff81557cba>] sock_wfree+0x3a/0x70 [<ffffffff8155c2b0>] skb_release_head_state+0x70/0x120 [<ffffffff8155c0b6>] __kfree_skb+0x16/0x30 [<ffffffff8155c119>] kfree_skb+0x49/0x170 [<ffffffff815e936e>] arp_error_report+0x3e/0x90 [<ffffffff81575bd9>] neigh_invalidate+0x89/0xc0 [<ffffffff81578dbe>] neigh_timer_handler+0x9e/0x2a0 [<ffffffff81578d20>] ? neigh_update+0x640/0x640 [<ffffffff81073558>] __do_softirq+0xc8/0x3a0 Since jump_label_{inc|dec} must be called from process context only, we must defer jump_label_dec() if net_disable_timestamp() is called from interrupt context. Reported-by: Igor Maravic <igorm@etf.rs> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
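One way to defer the decrement out of interrupt context, sketched under the assumption of an atomic counter of deferred decrements that is flushed later from process context; the names below are illustrative, not necessarily the exact fix:

#include <linux/atomic.h>
#include <linux/hardirq.h>
#include <linux/jump_label.h>

/* Illustrative only: count decrements requested from IRQ context and apply
 * them later from process context, where the mutex inside jump_label_dec()
 * may safely sleep. */
static atomic_t timestamp_dec_deferred;

static void my_disable_timestamp(struct jump_label_key *key)
{
	if (in_interrupt()) {
		/* cannot take the jump_label mutex here: defer */
		atomic_inc(&timestamp_dec_deferred);
		return;
	}
	jump_label_dec(key);
}

static void my_enable_timestamp(struct jump_label_key *key)
{
	/* process context: first flush any decrements deferred from IRQs */
	int deferred = atomic_xchg(&timestamp_dec_deferred, 0);

	while (deferred--)
		jump_label_dec(key);
	jump_label_inc(key);
}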
-
Axel Lin authored
This patch converts the drivers in drivers/net/ethernet/* to use the module_platform_driver() macro which makes the code smaller and a bit simpler. Cc: "David S. Miller" <davem@davemloft.net> Cc: Pantelis Antoniou <pantelis.antoniou@gmail.com> Cc: Vitaly Bordug <vbordug@ru.mvista.com> Cc: Wan ZongShun <mcuos.com@gmail.com> Cc: Nicolas Pitre <nico@fluxnic.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Marc Kleine-Budde <mkl@pengutronix.de> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Jiri Pirko <jpirko@redhat.com> Cc: Daniel Hellstrom <daniel@gaisler.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Tobias Klauser <tklauser@distanz.ch> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Richard Cochran <richard.cochran@omicron.at> Cc: Jonas Bonn <jonas@southpole.se> Cc: Sebastian Poehn <sebastian.poehn@belden.com> Cc: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Cc: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com> Cc: "Michał Mirosław" <mirq-linux@rere.qmqm.pl> Signed-off-by: Axel Lin <axel.lin@gmail.com> Acked-by: Wan ZongShun <mcuos.com@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Axel Lin authored
This patch converts the drivers in drivers/net/can/* to use the module_platform_driver() macro which makes the code smaller and a bit simpler. Cc: Wolfgang Grandegger <wg@grandegger.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Bhupesh Sharma <bhupesh.sharma@st.com> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Anatolij Gustschin <agust@denx.de> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: Kurt Van Dijck <kurt.van.dijck@eia.be> Cc: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Axel Lin <axel.lin@gmail.com> Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
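For reference, a minimal sketch of what such a conversion looks like for a hypothetical driver; module_platform_driver() expands to the module_init()/module_exit() pair that only registers and unregisters the platform_driver:

#include <linux/module.h>
#include <linux/platform_device.h>

/* Hypothetical driver used only to illustrate the conversion. */
static int my_eth_probe(struct platform_device *pdev)
{
	return 0;
}

static int my_eth_remove(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver my_eth_driver = {
	.probe	= my_eth_probe,
	.remove	= my_eth_remove,
	.driver	= {
		.name	= "my-eth",
		.owner	= THIS_MODULE,
	},
};

/* Replaces the hand-written module_init()/module_exit() boilerplate that
 * only called platform_driver_register()/platform_driver_unregister(). */
module_platform_driver(my_eth_driver);

MODULE_LICENSE("GPL");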
-
Ralf Baechle authored
The Linux coding style wants the return statement on its own line. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ralf Baechle authored
nr_route.ndigis is unsigned int so the nr_route.ndigis < 0 expression is never true and can be dropped. Doing the nr_ax25_dev_get call later allows the nr_route.ndigis test to bail out without having to dev_put. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Cc: Thomas Osterried <thomas@osterried.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ralf Baechle authored
struct nr_route_struct's mnemonic permits a string of up to 7 bytes to be used. If userland passes a string that is not zero-terminated to the kernel, adding a node to the routing table might result in the kernel attempting to read and copy a string that is too long. Mnemonic is part of the NET/ROM routing protocol; NET/ROM routing table updates only broadcast 6 bytes. The 7th byte in the mnemonic array exists only as a \0 termination character for the kernel code's convenience. Fixed by rejecting mnemonic strings that have no terminating \0 in the first 7 characters. Do this test only for NETROM_NODE to avoid breaking NETROM_NEIGH, where userland might pass an uninitialized mnemonic field. Initial patch by Dan Carpenter <dan.carpenter@oracle.com>. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Walter Harms <wharms@bfs.de> Cc: Thomas Osterried <thomas@osterried.de> Acked-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
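A hedged sketch of the termination check described, written as a small user-space helper; the constant and function names are illustrative, and only the 7-byte mnemonic length and the NETROM_NODE-only restriction come from the message above:

#include <errno.h>
#include <string.h>

#define MNEMONIC_LEN	7	/* per the message: 6 bytes + '\0' */

/* Illustrative check: reject a mnemonic with no '\0' in its first 7 bytes,
 * but only for node entries, since neighbour entries may legitimately
 * carry an uninitialized mnemonic field. */
static int validate_mnemonic(const char mnemonic[MNEMONIC_LEN],
			     int type, int node_type)
{
	if (type != node_type)
		return 0;

	if (memchr(mnemonic, '\0', MNEMONIC_LEN) == NULL)
		return -EINVAL;

	return 0;
}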
-
Ralf Baechle authored
Very large, nonsensical arguments or use in very extreme conditions could result in integer overflows. Check ioctl arguments to avoid such overflows and return -EINVAL for arguments that are too large. To allow the use of AX.25 for even the most extreme setup (think packet radio to the Phase 5E mars probe) we make no further attempt to clamp the argument range. Originally reported by Fan Long <longfancn@gmail.com> and a first patch was sent by Xi Wang <xi.wang@gmail.com>. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Cc: Xi Wang <xi.wang@gmail.com> Cc: Joerg Reuter <jreuter@yaina.de> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: Thomas Osterried <thomas@osterried.de> Signed-off-by: David S. Miller <davem@davemloft.net>
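A sketch of the kind of range check implied, assuming the ioctl argument is later multiplied by a scale factor such as HZ; the helper name and bounds are illustrative only:

#include <errno.h>
#include <limits.h>

/* Illustrative bound check: reject values that would overflow when scaled,
 * returning -EINVAL, without otherwise clamping the usable range. */
static int check_timer_arg(long value, long scale)
{
	if (value < 1 || value > LONG_MAX / scale)
		return -EINVAL;
	return 0;
}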
-
Ben Hutchings authored
Support for specific hardware belongs under drivers/net/ not net/. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Acked-by: Lennert Buytenhek <buytenh@wantstofly.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
Any headers included by drivers should be under include/, and any definitions they use are not really private to the core as the name "dsa_priv.h" suggests. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Acked-by: Lennert Buytenhek <buytenh@wantstofly.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
I mistakenly exported functions from slave.c that are only called from dsa.c, part of the same module. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Acked-by: Lennert Buytenhek <buytenh@wantstofly.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
Tg3 normally gets a performance boost by increasing the PCI Maximum Read Request Size (MRRS) to 4k. Unfortunately, this is causing some problems on particular hardware platforms. This patch removes all code that modifies the MRRS except for one case. As part of a solution to fix an internal FIFO problem on the 5719, the driver artificially capped the MRRS to 2k for the entire 5719, and later 5720, ASIC revs. This was overly aggressive and only really needed to be done for the 5719 A0. In the spirit of the rest of this patch, the driver will only reprogram the MRRS for this device if the value exceeds the 2k cap. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
On the earliest TSO-capable devices, TSO was accomplished through firmware. This firmware-based TSO cannot coexist with ASF management firmware, though. The tg3 driver determines whether or not ASF is enabled by calling tg3_get_eeprom_hw_cfg(), which checks a particular bit of NIC memory. Commit dabc5c67, entitled "tg3: Move TSO_CAPABLE assignment", accidentally moved the code that determines TSO capabilities earlier than the call to tg3_get_eeprom_hw_cfg(). As a consequence, the driver was attempting to determine TSO capabilities before it had all the data it needed to make the decision. This patch fixes the problem by revisiting and reevaluating the decision after tg3_get_eeprom_hw_cfg() is called. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Current SFB double hashing does not fulfill SFB theory if two flows share the same rxhash value. Using skb_flow_dissect() permits genuinely better hash dispersion, and provides tunnelling support as well. The double hashing point was mentioned by Florian Westphal. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Instead of using a custom flow dissector, use skb_flow_dissect() and benefit from tunnelling support. This lack of tunnelling support was mentioned by Dan Siemon. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
No functional changes. This uses the code we factorized in skb_flow_dissect() Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We use at least two flow dissectors in the network stack, with known limitations and code duplication. Introduce skb_flow_dissect() to factorize this, highly inspired by the existing dissector in __skb_get_rxhash(). Note: we extensively use skb_header_pointer(); this permits us to not touch the skb at all. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
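A sketch of the skb_header_pointer() pattern the note refers to: the header is copied into a local buffer only when it is not already in the linear area, so the skb itself is never modified. The dissection logic is reduced here to the IPv4 address case for illustration; the function name is not the kernel's:

#include <linux/ip.h>
#include <linux/skbuff.h>

/* Illustrative fragment: pull the IPv4 addresses for hashing without
 * linearising or otherwise touching the skb. */
static bool dissect_ipv4_addrs(const struct sk_buff *skb, int nhoff,
			       __be32 *src, __be32 *dst)
{
	const struct iphdr *iph;
	struct iphdr _iph;	/* stack buffer used only if the header is
				 * not already in the linear data area */

	iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph);
	if (!iph)
		return false;

	*src = iph->saddr;
	*dst = iph->daddr;
	return true;
}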
-
Yaniv Rosner authored
Change comparison order such that the variable will come before the compared value. Signed-off-by: Yaniv Rosner <yanivr@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Fix spelling, alignment, empty lines, relocate the is_4_port_mode function, and split bnx2x_link_status_update function. Signed-off-by: Yaniv Rosner <yanivr@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yaniv Rosner authored
Fix the MAC test of the 1G port of the BCM57800 to use the UMAC instead of the XMAC. Signed-off-by: Yaniv Rosner <yanivr@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-