- 15 Sep, 2010 18 commits
-
-
Matt Carlson authored
The tg3 driver's phy routines define temporary variables in many locations within the same routine. This patch unifies all of these temporary variables into one location. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch eases stack pressure by dynamically allocating the memory used to temporarily store VPD data. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
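Moving a scratch buffer off the kernel stack is a standard pattern; a minimal sketch (the 256-byte size and the surrounding flow are assumptions, not the actual tg3 code):

    u8 *vpd_data;

    /* Heap allocation replaces a large on-stack array. */
    vpd_data = kmalloc(256, GFP_KERNEL);
    if (!vpd_data)
        return;

    /* ... read and parse the VPD contents into vpd_data ... */

    kfree(vpd_data);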
-
Matt Carlson authored
This patch converts the driver to prefer the skb_is_gso_v6() helper over the explicit inlined version. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
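For context, the helper in include/linux/skbuff.h is a thin wrapper around the gso_type bit test the driver had been open-coding:

    static inline int skb_is_gso_v6(const struct sk_buff *skb)
    {
        return skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6;
    }

Call sites therefore change from testing skb_shinfo(skb)->gso_type directly to calling skb_is_gso_v6(skb).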
-
Matt Carlson authored
Now that each NAPI instance has its own producer ring, it no longer makes sense to keep the producer ring structure external. This patch migrates the producer ring struct to tg3_napi and pivots the code to the new implementation. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
TG3_IRQ_MAX_VECS should be seen as the maximum number of vectors that any device could be expected to use. tp->irq_max represents the maximum number of vectors the current device can use. This patch clarifies the semantics of the code to match the above description. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch adjusts the driver to use the tg3_start_xmit_dma_bug() transmit routine for all revisions of the 5717 ASIC, and then allows the driver to attach to B0 and later devices. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
NCSI firmware does not accept APE events. It relies on a "driver state" location in shared memory to tell it what the driver's current state is. This patch pivots the code to use the new driver state scheme. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
It was recently discovered that enabling TSS can lock up the device. This patch disables the feature until a suitable workaround can be found. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
Earlier versions of tg3 devices had a problem where the read DMA FIFO could be overrun in certain edge conditions. The fix was to limit the number of rx BDs the hardware would fetch at a time. For later devices (5761, 5784 and later ASIC revs), there is a hardware fix that must be enabled to fix the same problem. This patch adds that hardware fix. There is a gap in the ASIC revision lineage where neither fix is applied. This is intentional as these ASIC revisions are not afflicted by the bug. Reviewed-by: Benjamin Li <benli@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
andrew hendry authored
Connect already has socket locking. Signed-off-by: Andrew Hendry <andrew.hendry@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Hendry authored
Accept already has socket locking. [ Extend socket locking over TCP_LISTEN state test. -DaveM ] Signed-off-by: Andrew Hendry <andrew.hendry@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
andrew hendry authored
Accept updates socket values in three lines of code, so wrap them with lock_sock. Signed-off-by: Andrew Hendry <andrew.hendry@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
andrew hendry authored
Listen updates socket values and needs lock_sock. Signed-off-by: Andrew Hendry <andrew.hendry@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
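All four of these x25 changes apply the same lock_sock()/release_sock() bracket around code that touches struct sock state; a sketch of what the listen path looks like with the lock applied (close to, but not necessarily identical to, the committed code):

    static int x25_listen(struct socket *sock, int backlog)
    {
        struct sock *sk = sock->sk;
        int rc = -EOPNOTSUPP;

        lock_sock(sk);  /* serialize against other users of the socket */
        if (sk->sk_state != TCP_LISTEN) {
            memset(&x25_sk(sk)->dest_addr, 0, X25_ADDR_LEN);
            sk->sk_max_ack_backlog = backlog;
            sk->sk_state = TCP_LISTEN;
            rc = 0;
        }
        release_sock(sk);
        return rc;
    }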
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Ivo van Doorn <IvDoorn@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 Sep, 2010 11 commits
-
-
Jean Delvare authored
The code is quite convoluted; simplify it. This also avoids calling e1000_request_irq() without testing the value it returned, which was bad. Signed-off-by: Jean Delvare <jdelvare@suse.de> Cc: Bruce Allan <bruce.w.allan@intel.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
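The e1000_request_irq() point is the usual one about not ignoring a possible failure; a hedged sketch of the shape of the fix (e1000_request_irq() is driver-internal, and the error label is assumed):

    int err;

    err = e1000_request_irq(adapter);
    if (err)
        goto err_setup;   /* unwind earlier setup and report the failure */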
-
Andy Shevchenko authored
Signed-off-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dmitry Kravkov authored
The default number of rx buffers will be divided equally among the allocated queues. This decreases the amount of pre-allocated buffers on systems with multiple CPUs. The user can override this behavior with ethtool -G. The minimum number of rx buffers per queue is set to 128. Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
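The sizing rule described above is just an integer division with a floor; a sketch under assumed names (MAX_RX_AVAIL, bp->num_queues and bp->rx_ring_size are illustrative, not the exact bnx2x identifiers):

    /* Divide the default buffer budget evenly across the RX queues,
     * but never configure fewer than 128 buffers per queue.
     */
    bp->rx_ring_size = max_t(int, 128, MAX_RX_AVAIL / bp->num_queues);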
-
Ondrej Zary authored
Empty received URBs are currently counted as errors, but the device sometimes sends them as part of regular traffic, so remove this check. Signed-off-by: Ondrej Zary <linux@rainbow-software.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ondrej Zary authored
Check the usb_string() return value for errors (negative values), which was previously missing. Also change the ignore message a bit and lower its level to info. Signed-off-by: Ondrej Zary <linux@rainbow-software.org> Signed-off-by: David S. Miller <davem@davemloft.net>
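usb_string() returns the string length on success and a negative errno on failure, so the check is the standard one; a sketch with an assumed buffer and device context:

    char buf[32];
    int ret;

    ret = usb_string(udev, udev->descriptor.iProduct, buf, sizeof(buf));
    if (ret < 0) {
        dev_info(&udev->dev, "ignoring device, unreadable string descriptor\n");
        return ret;
    }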
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andreas Schwab authored
Modifying an object twice without an intervening sequence point is undefined. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andreas Schwab authored
Modifying an object twice without an intervening sequence point is undefined. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: David S. Miller <davem@davemloft.net>
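Both patches fix the same class of C bug; an illustration of the undefined pattern and a well-defined rewrite (purely illustrative, not the patched code):

    int i = 0;
    int buf[4] = { 1, 2, 3, 4 };

    /* Undefined behaviour: i is modified and read again with no
     * intervening sequence point, e.g.:
     *
     *     buf[i++] = buf[i];
     */

    /* Well-defined: separate the side effect into its own statement. */
    buf[i] = buf[i + 1];
    i++;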
-
Denis Kirjanov authored
This patch adds support for PM hooks to the sundance driver. Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
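One common shape for PCI network-driver PM support is the legacy .suspend/.resume pair on struct pci_driver (the actual patch may instead use dev_pm_ops); a sketch with assumed names, wired into the driver struct as .suspend = sundance_suspend, .resume = sundance_resume:

    static int sundance_suspend(struct pci_dev *pdev, pm_message_t state)
    {
        struct net_device *ndev = pci_get_drvdata(pdev);

        if (netif_running(ndev))
            netif_device_detach(ndev);
        pci_save_state(pdev);
        pci_set_power_state(pdev, pci_choose_state(pdev, state));
        return 0;
    }

    static int sundance_resume(struct pci_dev *pdev)
    {
        struct net_device *ndev = pci_get_drvdata(pdev);

        pci_set_power_state(pdev, PCI_D0);
        pci_restore_state(pdev);
        if (netif_running(ndev))
            netif_device_attach(ndev);
        return 0;
    }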
-
Eric Dumazet authored
Allocate hash tables for every online CPU, not every possible one, and make the allocations NUMA-aware. Don't use a full page on arches where PAGE_SIZE > 1024*sizeof(void *). Misc: __percpu, __read_mostly and __cpuinit annotations; flow_compare_t is just an "unsigned long". Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
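The online-CPU, NUMA-aware allocation described above follows a standard pattern; a sketch (the per-cpu variable and table size are assumptions):

    int cpu;

    for_each_online_cpu(cpu) {
        /* Place each CPU's hash table on that CPU's NUMA node,
         * sized well below a full page.
         */
        void *table = kzalloc_node(1024 * sizeof(void *),
                                   GFP_KERNEL, cpu_to_node(cpu));
        if (!table)
            return -ENOMEM;
        per_cpu(flow_table_ptr, cpu) = table;  /* flow_table_ptr: assumed
                                                  DEFINE_PER_CPU variable */
    }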
-
Ben Hutchings authored
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Sep, 2010 1 commit
-
-
David S. Miller authored
Reported-by: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Sep, 2010 10 commits
-
-
stephen hemminger authored
Now that est_tree_lock is acquired with BH protection, the other call is unnecessary. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use the rcu_dereference_rtnl() helper. Change hard-coded constants in fib_flag_trans(): 7 -> RTN_UNREACHABLE, 8 -> RTN_PROHIBIT. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
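A sketch of the constant substitution in fib_flag_trans(), with the surrounding logic abbreviated (RTN_UNREACHABLE and RTN_PROHIBIT come from include/linux/rtnetlink.h):

    /* Before: magic numbers. */
    if (type == 7 || type == 8)
        flags |= RTF_REJECT;

    /* After: named rtnetlink route types (RTN_UNREACHABLE == 7,
     * RTN_PROHIBIT == 8).
     */
    if (type == RTN_UNREACHABLE || type == RTN_PROHIBIT)
        flags |= RTF_REJECT;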
-
Ben Hutchings authored
This requires some reorganisation of channel setup and teardown to ensure that we can always roll back a failed change. Based on work by Steve Hodgson <shodgson@solarflare.com> Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
- Allow the ring size to be specified in non-power-of-two sizes (for instance, to limit the amount of receive buffers).
- Automatically size the event queue.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
This will allow for reallocation of channel structures and rings. Change module parameter separate_tx_channels to be read-only, since we now require its value to be constant. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
In preparation for changes to the way channels and queue structures are allocated, revise the macros and functions used to look up and iterate over them.
- Replace efx_for_each_tx_queue() with iteration over channels, then TX queues.
- Replace efx_for_each_rx_queue() with iteration over channels, then RX queues (with one exception, shortly to be removed).
- Introduce efx_get_{channel,rx_queue,tx_queue}() functions to look up channels and queues by index.
- Introduce efx_channel_get_{rx,tx}_queue() functions to look up a channel's queues.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
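The new iteration shape nests a per-channel loop around a per-queue loop; a sketch using the macro names above (the loop body is illustrative):

    struct efx_channel *channel;
    struct efx_tx_queue *tx_queue;

    /* Old form: efx_for_each_tx_queue(tx_queue, efx) { ... } */

    /* New form: iterate over channels, then each channel's TX queues. */
    efx_for_each_channel(channel, efx) {
        efx_for_each_channel_tx_queue(tx_queue, channel) {
            efx_init_tx_queue(tx_queue);    /* assumed loop body */
        }
    }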
-
Ben Hutchings authored
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
Currently we allocate DMA descriptor rings and event rings using pci_alloc_consistent() which selects non-blocking behaviour from the page allocator (GFP_ATOMIC). This is unnecessary, and since we currently allocate a single contiguous block for each ring (up to 32 pages!), these allocations are likely to fail if there is any significant memory pressure. Use dma_alloc_coherent() and GFP_KERNEL instead. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
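The substitution itself is mechanical; a sketch of the before and after for one ring buffer (the field names follow the sfc style but are assumptions):

    /* Before: implicitly GFP_ATOMIC, prone to failure under pressure. */
    buffer->addr = pci_alloc_consistent(efx->pci_dev, buffer->len,
                                        &buffer->dma_addr);

    /* After: may sleep, so the allocator can reclaim memory if needed. */
    buffer->addr = dma_alloc_coherent(&efx->pci_dev->dev, buffer->len,
                                      &buffer->dma_addr, GFP_KERNEL);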
-
Ben Hutchings authored
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-