- 22 Jun, 2011 3 commits
-
-
Paul Gortmaker authored
There are enough instances of this: iph->frag_off & htons(IP_MF | IP_OFFSET) that a helper function is probably warranted. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
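A minimal sketch of what such a helper could look like (the function name follows the description and may differ from the merged one):

#include <net/ip.h>	/* struct iphdr (via linux/ip.h), IP_MF, IP_OFFSET */

/* True if this IP packet is a fragment: either the More Fragments bit is
 * set or the fragment offset is non-zero. */
static inline bool ip_is_fragment(const struct iphdr *iph)
{
	return (iph->frag_off & htons(IP_MF | IP_OFFSET)) != 0;
}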
-
Randy Dunlap authored
Fix section mismatch warning: WARNING: drivers/net/irda/smsc-ircc2.o(.devinit.text+0x1a7): Section mismatch in reference from the function smsc_ircc_pnp_probe() to the function .init.text:smsc_ircc_open() Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexey Dobriyan authored
Remove the linux/mm.h inclusion from netdevice.h -- it's unused (I've checked manually). To prevent mm.h inclusion via other channels, also extract the "enum dma_data_direction" definition into a separate header. This tiny piece is what glues netdevice.h to mm.h via "netdevice.h => dmaengine.h => dma-mapping.h => scatterlist.h => mm.h". Removing mm.h from scatterlist.h was tried and found not feasible on most archs, so the link is cut off earlier instead. Hope people are OK with a tiny include file. Note that mm_types.h is still dragged in, but that is a separate story. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
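The extracted header is tiny; a sketch of its likely shape (the file name and guard are assumptions, the enum values are the standard DMA API ones):

/* e.g. include/linux/dma-direction.h (name assumed) */
#ifndef _LINUX_DMA_DIRECTION_H
#define _LINUX_DMA_DIRECTION_H

enum dma_data_direction {
	DMA_BIDIRECTIONAL	= 0,
	DMA_TO_DEVICE		= 1,
	DMA_FROM_DEVICE		= 2,
	DMA_NONE		= 3,
};

#endif /* _LINUX_DMA_DIRECTION_H */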
-
- 21 Jun, 2011 37 commits
-
-
John Fastabend authored
Missing error checking before nla_parse_nested(). Reported-by: Mark Rustad <mark.d.rustad@intel.com> Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
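A sketch of the kind of check being added (the attribute index, max and policy names are hypothetical, not the actual dcbnl ones):

/* Sketch only: bail out before nla_parse_nested() if the nested
 * attribute is absent. */
static int example_parse_app(struct nlattr **tb)
{
	struct nlattr *nested[DCB_EXAMPLE_MAX + 1];	/* hypothetical max */
	int err;

	if (!tb[DCB_ATTR_EXAMPLE])	/* the missing check */
		return -EINVAL;

	err = nla_parse_nested(nested, DCB_EXAMPLE_MAX, tb[DCB_ATTR_EXAMPLE],
			       example_policy);
	if (err)
		return err;

	/* ... use nested[] ... */
	return 0;
}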
-
John Fastabend authored
Incorrect return type on dcb_setapp(); this routine returns negative error codes. All call sites of dcb_setapp() already assign the return value to an int, so there is no need to update drivers. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Fastabend authored
With multiple APP entries per selector and protocol drivers or stacks may want to pick a specific value or stripe traffic across many priorities. Also if an APP entry in use is deleted the stack/driver may want to choose from the existing APP entries. To facilitate this and avoid having duplicate code to walk the APP ring provide a routine dcb_ieee_getapp_mask() to return a u8 bitmask of all priorities set for the specified selector and protocol. This routine and bitmask is a helper for DCB kernel users. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
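A rough sketch of such a helper, assuming an APP list protected by a lock (the list, lock and struct field names are assumptions based on the description):

static u8 dcb_ieee_getapp_mask(struct net_device *dev, struct dcb_app *app)
{
	struct dcb_app_type *itr;	/* assumed list entry type */
	u8 prio_mask = 0;

	spin_lock(&dcb_lock);
	list_for_each_entry(itr, &dcb_app_list, list) {
		if (itr->ifindex == dev->ifindex &&
		    itr->app.selector == app->selector &&
		    itr->app.protocol == app->protocol)
			prio_mask |= 1 << itr->app.priority;
	}
	spin_unlock(&dcb_lock);

	return prio_mask;
}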
-
John Fastabend authored
Now that we allow multiple IEEE App entries we need a way to remove specific entries. To do this add the ieee_dcb_delapp() routine. Additionally, drivers may need to remove the APP entry from their firmware tables. Add a dcb ops routine to handle this. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Fastabend authored
This adds a setapp routine for IEEE802.1Qaz encoded APP data types. The IEEE 802.1Qaz spec encodes the priority bits differently and allows for multiple APP data entries of the same selector and protocol. Trying to force these to use the same set routines was becoming tedious. Furthermore, userspace could probably enforce the correct semantics, but expecting drivers to do this seems error prone in the firmware case. For these reasons add ieee_dcb_setapp() that understands the IEEE 802.1Qaz encoded form. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Fastabend authored
Now that dcbnl is being used in many cases by more than a single agent, it is beneficial to be notified when some entity, either a driver or user space, has changed the DCB attributes. Today applications either end up polling the interface or relying on a user space database to maintain the DCB state and post events. Polling is a poor solution for obvious reasons. And relying on a user space database has its own downside. Namely, it has created strange boot dependencies requiring the database be populated before any application dependent on DCB attributes starts, or the application goes into a polling loop. Populating the database requires negotiating link settings with the peer and can take anywhere from less than a second up to a few seconds depending on the switch implementation. Perhaps more importantly, if another application or an embedded agent sets a DCB link attribute, the database has no way of knowing other than polling the kernel. This prevents applications from responding quickly to changes in link events, which, at least in the FCoE case and probably for any other protocol expecting a lossless link, may result in IO errors. By adding a multicast group for DCB we have a clean way to disseminate kernel DCB link attributes up to user space, avoiding the need for user space to maintain a coherent database and disperse events that potentially do not reflect the current link state. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Fastabend authored
Adding the capabilities bitmask to the get_ieee response allows user space to determine the current DCBX mode, either CEE or IEEE. This is useful with devices that support switching between modes, where knowing the current state is relevant. Derived from work by Mark Rustad. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
And change iSCSI RQ doorbell size from 16B to 64B to match new firmware. Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: Eddie Wai <eddie.wai@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Satoru Moriya authored
This patch adds 2 tracepoints to get the status of a socket receive queue and related parameters. One tracepoint is added to sock_queue_rcv_skb. It records the rcvbuf size and its usage. The other tracepoint is added to __sk_mem_schedule and records the memory limits for sockets and the current usage. By using these tracepoints we're able to know the detailed reason why the kernel dropped a packet. Signed-off-by: Satoru Moriya <satoru.moriya@hds.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
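As an illustration of the first tracepoint, a minimal TRACE_EVENT along these lines could record the receive-queue state (the event name and fields are assumptions based on the description, not necessarily the merged definition):

TRACE_EVENT(sock_rcvqueue_full,		/* hypothetical name */

	TP_PROTO(struct sock *sk, struct sk_buff *skb),

	TP_ARGS(sk, skb),

	TP_STRUCT__entry(
		__field(int,          rmem_alloc)
		__field(unsigned int, truesize)
		__field(int,          sk_rcvbuf)
	),

	TP_fast_assign(
		__entry->rmem_alloc = atomic_read(&sk->sk_rmem_alloc);
		__entry->truesize   = skb->truesize;
		__entry->sk_rcvbuf  = sk->sk_rcvbuf;
	),

	TP_printk("rmem_alloc=%d truesize=%u sk_rcvbuf=%d",
		  __entry->rmem_alloc, __entry->truesize, __entry->sk_rcvbuf)
);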
-
Satoru Moriya authored
This patch adds a tracepoint to __udp_queue_rcv_skb to get the return value of ip_queue_rcv_skb. It indicates why the kernel drops a packet at this point. ip_queue_rcv_skb returns the following values in the packet drop case: rcvbuf is full: -ENOMEM; sk_filter returns an error: -EINVAL, -EACCESS, -ENOMEM, etc.; __sk_mem_schedule returns an error: -ENOBUF. Signed-off-by: Satoru Moriya <satoru.moriya@hds.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
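Usage in the queueing path would look roughly like this (the tracepoint name is an assumption, not necessarily the merged one):

	rc = ip_queue_rcv_skb(sk, skb);
	if (rc < 0) {
		/* record why the packet was dropped (-ENOMEM, -EINVAL, ...) */
		trace_udp_fail_queue_rcv_skb(rc, sk);	/* assumed name */
		kfree_skb(skb);
		return -1;
	}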
-
Jesper Juhl authored
It was suggested by "make versioncheck" that the following includes of linux/version.h are redundant: /home/jj/src/linux-2.6/net/caif/caif_dev.c: 14 linux/version.h not needed. /home/jj/src/linux-2.6/net/caif/chnl_net.c: 10 linux/version.h not needed. /home/jj/src/linux-2.6/net/ipv4/gre.c: 19 linux/version.h not needed. /home/jj/src/linux-2.6/net/netfilter/ipset/ip_set_core.c: 20 linux/version.h not needed. /home/jj/src/linux-2.6/net/netfilter/xt_set.c: 16 linux/version.h not needed. and it seems that it is right. Beyond manually inspecting the source files I also did a few build tests with various configs to confirm that including the header in those files is indeed not needed. Here's a patch to remove the pointless includes. Signed-off-by: Jesper Juhl <jj@chaosbits.net> Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Richard Cochran authored
This patch enables software (and phy device) transmit time stamping. Compile tested only. Signed-off-by: Richard Cochran <richard.cochran@omicron.at> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Richard Cochran authored
Because the socket buffer is freed in the completion interrupt, it is not safe to access it after submitting it to the hardware. Signed-off-by: Richard Cochran <richard.cochran@omicron.at> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Convert the xen driver to the 64 bit statistics interface. Use stats_sync to ensure that the 64 bit update is read atomically on 32 bit platforms. Put hot statistics into a per-cpu table. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
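The per-cpu 64 bit statistics pattern referred to here looks roughly as follows (struct and field names are illustrative, not the driver's actual ones):

#include <linux/u64_stats_sync.h>

struct example_rx_stats {
	u64			packets;
	u64			bytes;
	struct u64_stats_sync	syncp;
};

/* Hot path: per-cpu, so only the seqcount protects 32-bit readers. */
static void example_rx_account(struct example_rx_stats *s, unsigned int len)
{
	u64_stats_update_begin(&s->syncp);
	s->packets++;
	s->bytes += len;
	u64_stats_update_end(&s->syncp);
}

/* Reader: retry if a concurrent update could have produced a torn value. */
static void example_rx_read(struct example_rx_stats *s, u64 *packets, u64 *bytes)
{
	unsigned int start;

	do {
		start    = u64_stats_fetch_begin(&s->syncp);
		*packets = s->packets;
		*bytes   = s->bytes;
	} while (u64_stats_fetch_retry(&s->syncp, start));
}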
-
stephen hemminger authored
Need to add a stat_sync wrapper around the 64 bit statistic values. Fix a wraparound bug in the lockup detector where it unsafely compares a 64 bit value that is not atomic. Since we only care about detecting activity, just looking at the current low order bits will work. Remove unused entries in the old vxge_sw_stats structure. Change the error counters to unsigned long since they won't grow so large as to need 64 bits. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Convert input functional block device to use 64 bit stats. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Jamal Hadi Salim <hadi@cyberus.ca> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Unnecessary casts of void * clutter the code. These are the remainder casts after several specific patches to remove netdev_priv and dev_priv. Done via coccinelle script (and a little editing):

$ cat cast_void_pointer.cocci
@@
type T;
T *pt;
void *pv;
@@
- pt = (T *)pv;
+ pt = pv;

Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Sjur Brændeland <sjur.brandeland@stericsson.com> Acked-By: Chris Snook <chris.snook@gmail.com> Acked-by: Jon Mason <jdmason@kudzu.us> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: David Dillow <dave@thedillows.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vasu Dev authored
Currently a single PCI pool is used across all CPUs and that doesn't scale up as the number of CPUs increases, so this patch adds a per CPU PCI pool for udl setup; that aligns well with the FCoE stack, which already has per CPU exch locking. Adds per CPU PCI pool alloc and free in ixgbe_fcoe_ddp_pools_alloc and ixgbe_fcoe_ddp_pools_free, and uses the CPU specific pool during DDP setup. Re-arranged the ixgbe_fcoe struct, using pahole, to have fewer holes along with adding the pools ptr. Signed-off-by: Vasu Dev <vasu.dev@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Emil Tantilov authored
This patch adds support for Dell CEM (Comprehensive Embedded Management). This consists of informing the management firmware of the driver version during probe on 82599 and X540 HW. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Evan Swanson <evan.swanson@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
The ixgbe_dcb_txq_to_tc() routine was used to map TX rings to a DCB traffic class. Now that a tx_ring has a DCB traffic class associated with it this routine is no longer needed. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
Now the flow director perfect filters feature can coexist with DCB. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
This bit mask is wrong; DCBX_HOST is always set. It was missed up until now because lldpad reprograms the device on a link event. However, this is still wrong, and it is best not to be mis-configured for some time immediately following ixgbe_up(). Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
Setup RSS redirection table to be compatible with multiple packet buffers. Currently, this works on 82599 devices because the RSS redirection index is masked by the number of queues per packet buffer. This sets the cap on the RSS table to maxq. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
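A sketch of the redirection-table fill that produces this behaviour (the register macros follow ixgbe naming; the surrounding code is illustrative):

	u32 reta = 0;
	int i, j;

	/* 128 table entries, 4 packed per 32-bit RETA register; wrap the
	 * queue index at the capped RSS queue count. */
	for (i = 0, j = 0; i < 128; i++, j++) {
		if (j == rss_i)		/* rss_i = capped queue count */
			j = 0;
		reta = (reta << 8) | j;
		if ((i & 3) == 3)
			IXGBE_WRITE_REG(hw, IXGBE_RETA(i >> 2), reta);
	}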
-
John Fastabend authored
The tx_idx and rx_idx values are swapped on 82598 devices with DCB enabled. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
The number of TX and RX queues allocated depends on the device type, the current feature set, online CPUs, and various compile flags. To enable DCB with multiple queues and allow it to coexist with all the features currently implemented, it has to set up a valid queue count. This is done at init time using the FDIR and RSS max queue counts and allowing each TC to allocate a queue per CPU. DCB will now use available queues up to (8 x TCs); this is a somewhat arbitrary cap but allows DCB to use up to 64 queues. It's easy to increase this later if that is needed. This is prep work to enable Flow Director with DCB. After this DCB can easily coexist with existing features and no longer needs its own DCB feature ring. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
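The resulting queue budget could be computed along these lines (a sketch; the macro and helper names are illustrative, not the driver's):

#define EXAMPLE_MAX_Q_PER_TC	8	/* the arbitrary cap noted above */

static unsigned int example_dcb_queue_count(unsigned int num_tcs)
{
	unsigned int q_per_tc = min_t(unsigned int, num_online_cpus(),
				      EXAMPLE_MAX_Q_PER_TC);

	return num_tcs * q_per_tc;	/* up to 8 TCs x 8 queues = 64 */
}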
-
John Fastabend authored
ixgbe devices support different numbers of packet buffers, either 8 or 4. Here we only allocate the minimal number of packet buffers required to implement the net_device's number of traffic classes. Fewer traffic classes allows for larger packet buffers in hardware. Also more Tx/Rx queues can be given to each traffic class. This patch is mostly about propagating the number of traffic classes through the init path. Specifically this adds the 4TC cases to the MRQC and MTQC setup routines. Also ixgbe_setup_tc() was sanitized to handle other traffic class values. Finally, changing the number of packet buffers in the hardware requires the device to reinit. So this moves the reinit work from DCB into the main ixgbe_setup_tc() routine to consolidate the reset code. Now dcbnl_xxx ops call ixgbe_setup_tc() to configure packet buffers if needed. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
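The 4TC additions to the transmit-queue config roughly take this shape (the register defines follow ixgbe header naming; the branch structure is a sketch, not the exact merged code):

	u32 mtqc;

	if (tcs > 4)
		mtqc = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;	/* 8 packet buffers */
	else if (tcs > 1)
		mtqc = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;	/* 4 larger buffers */
	else
		mtqc = IXGBE_MTQC_64Q_1PB;			/* no DCB */

	IXGBE_WRITE_REG(hw, IXGBE_MTQC, mtqc);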
-
John Fastabend authored
The MRQC and MTQC registers are configured in the main setup path but are also reconfigured in the DCB setup path. The DCB path fixes the DCB configuration by configuring the SECTXMINIFG gap, which is required for DCB pause to operate correctly. This patch reduces the duplicate code and does all setup in ixgbe_setup_mtqc() and ixgbe_setup_mrqc(). Additionally, this removes the IXGBE_QDE write. This write never set the WRITE bit in the register, so it was not actually doing anything. Also, this was meant to clear the register, but it is never set and defaults to zero. If this is needed for SRIOV it should be added correctly in a follow up patch, but it has never worked, so removing it here should be OK. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
Consolidate packet buffer allocation currently being done in the DCB path and main path. This allows the feature set and packet buffer requirements to be done once. This is prep work to allow DCB to coexist with other features namely, flow director. CC: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
Replace duplicated code in if/else branches with single check and ixgbe_init_interrupt_scheme(). Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Stephen Hemminger authored
Use standard format for net_device_ops (without &) Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Acked-by: Greg Rose <Gregory.v.rose@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Greg Rose authored
In two places storage for mbx_ops is misidentified as type ixgbe_mac_operations. Reported-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Michał Mirosław authored
Private rx_csum flags are now duplicate of netdev->features & NETIF_F_RXCSUM. Removing this needs deeper surgery. Things noticed: - HW VLAN acceleration probably can be toggled, but it's left as is - the resets on RX csum offload change can probably be avoided - there is A LOT of copy-and-pasted code here Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Michał Mirosław authored
Private rx_csum flags are now duplicate of netdev->features & NETIF_F_RXCSUM. Removing this needs deeper surgery. Things noticed: - RX csum disabled by default - HW VLAN acceleration probably can be toggled, but it's left as is - the resets on RX csum offload change can probably be avoided - there is A LOT of copy-and-pasted code here Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
David S. Miller authored
Conflicts: drivers/net/wireless/iwlwifi/iwl-agn-rxon.c drivers/net/wireless/rtlwifi/pci.c net/netfilter/ipvs/ip_vs_core.c
-
Linus Torvalds authored
-
Linus Torvalds authored
Commit 13e12d14 ("vfs: reorganize 'struct inode' layout a bit") moved things around a bit and changed i_state to be unsigned int instead of unsigned long. That was to help structure layout for the 64-bit case, and shrink 'struct inode' a bit (admittedly that only happened when spinlock debugging was on and i_flags didn't pack with i_lock). However, Meelis Roos reports that this results in unaligned exceptions on sparc, and it turns out that the bit-locking primitives that we use for the I_NEW bit want to use the bitops. Which want 'unsigned long', not 'unsigned int'. We really should fix the bit locking code to not have that kind of requirement, but that's a much bigger change. So for now, revert that field back to 'unsigned long' (but keep the other re-ordering changes from the commit that caused this). Andi points out that we have played games with this in 'struct page', so it's solvable with other hacks too, but since right now the struct inode size advantage only happens with some rare config options, it's not worth fighting. It _would_ be worth fixing the bitlocking code, though. Especially since there is no type safety in the bitlocking code (this never caused any warnings, and worked fine on x86-64, because the bitlocks take a 'void *' and x86-64 doesn't care that deeply about alignment). So it's currently a very easy problem to trigger by mistake and never notice. Reported-by: Meelis Roos <mroos@linux.ee> Cc: Andi Kleen <andi@firstfloor.org> Cc: David Miller <davem@davemloft.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
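The constraint being described is simply that the generic bitops (and the bit-lock helpers built on them) take an unsigned long word; a minimal illustration under that assumption:

#include <linux/bitops.h>

struct example_inode_state {
	unsigned long state;	/* must stay unsigned long for the bitops */
};

static bool example_mark_new(struct example_inode_state *s)
{
	/* test_and_set_bit() operates on a volatile unsigned long *, so an
	 * unsigned int field is neither the right width nor guaranteed the
	 * right alignment for it on all architectures (e.g. sparc). */
	return !test_and_set_bit(3 /* e.g. __I_NEW */, &s->state);
}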
-
git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6
Linus Torvalds authored
* 'drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6:
  drm/radeon/kms/r6xx+: voltage fixes
  drm/nouveau: drop leftover debugging
  drm/radeon: avoid warnings from r600/eg irq handlers on powered off card.
  drm/radeon/kms: add missing param for dce3.2 DP transmitter setup
  drm/radeon/kms/atom: fix duallink on some early DCE3.2 cards
  drm/nouveau: fix assumption that semaphore dmaobj is valid in x-chan sync
  drm/nv50/disp: fix gamma with page flipping overlay turned on
  drm/nouveau/pm: Prevent overflow in nouveau_perf_init()
  drm/nouveau: fix big-endian switch
-