- 04 Sep, 2013 40 commits
-
Sucheta Chakraborty authored
This patch fixes warning "warning: symbol 'qlcnic_set_dcb_ops' was not declared. Should it be static?" Signed-off-by: Sucheta Chakraborty <sucheta.chakraborty@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
This was found with a manual audit and I don't have a reproducer. We limit ->calling_len and ->called_len when we get them from copy_from_user() in x25_ioctl(), so when they come from skb->data we should cap them there as well. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
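A minimal sketch of the kind of capping described, assuming the length byte is read out of the facilities block in skb->data (the pointer p and the offset are illustrative; X25_MAX_AE_LEN is the size of the ->calling_ae/->called_ae buffers):

        /* Length taken from packet data is untrusted: clamp it to the
         * destination buffer size before copying, mirroring the checks
         * already done for values obtained via copy_from_user(). */
        unsigned int len = p[2];

        if (len > X25_MAX_AE_LEN)
                len = X25_MAX_AE_LEN;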
-
Dan Carpenter authored
"tty->name" and "name" are a 64 character buffers. My static checker complains because we add the "cf" on the front so it look like we are copying a 66 character string into a 64 character buffer. Also if the name is larger than IFNAMSIZ (16) it triggers a BUG_ON() inside the call to alloc_netdev(). This is all under CAP_SYS_ADMIN so it's not a security fix, it just adds a little robustness. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Anton Blanchard authored
The hypervisor is big endian, so little endian kernel builds need to byteswap. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: David S. Miller <davem@davemloft.net>
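The usual pattern for this kind of fix, shown as a generic hedged example (variable names are made up): values shared with the hypervisor are converted at the boundary, which is a no-op on big-endian kernels and a byteswap on little-endian ones.

        __be64 wire_val = cpu_to_be64(cpu_val);    /* CPU order -> hypervisor order */
        u64 cpu_val2    = be64_to_cpu(wire_val);   /* hypervisor order -> CPU order */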
-
Jingoo Han authored
Casting from 'void *' is unnecessary, because casting from 'void *' to any pointer type is automatic. Reported-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
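An illustration of the cleanup (the struct name is made up for the example):

        struct my_priv *priv;

        priv = (struct my_priv *)netdev_priv(dev);   /* before: redundant cast            */
        priv = netdev_priv(dev);                     /* after: void * converts implicitly */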
-
Michael Chan authored
Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
Use bp->pfid from bnx2x instead to avoid duplication. Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
Use BP_PORT and chip_port_mode directly from bnx2x.h to avoid duplication. Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
This avoids duplicating the same logic. Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
This eliminates duplication and ensures that all bnx2x chips will be supported. Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Veaceslav Falico authored
Currently we're linking upper devices to lower ones, which results in an upside-down relationship: upper devices see lower devices via their upper lists. Fix this by correctly linking lower devices to the upper ones. CC: "David S. Miller" <davem@davemloft.net> CC: Eric Dumazet <edumazet@google.com> CC: Jiri Pirko <jiri@resnulli.us> CC: Alexander Duyck <alexander.h.duyck@intel.com> CC: Cong Wang <amwang@redhat.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
The goal of this patch is to harmonize the cleanup done on a skbuff on the rx path. Before this patch, the behavior differed depending on the tunnel type. Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
The goal of this patch is to harmonize the cleanup done on a skbuff on the xmit path. Before this patch, the behavior differed depending on the tunnel type. Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
This function was only used when a packet was sent to another netns. Now, it can also be used after tunnel encapsulation or decapsulation. Only skb_orphan() should not be done when a packet is not crossing netns. Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
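A hedged sketch of the resulting helper shape (the xnet flag name and the exact field list are assumptions based on later kernels): all per-path state is reset in both cases, but the skb is only orphaned when it actually crosses a netns.

        void skb_scrub_packet(struct sk_buff *skb, bool xnet)
        {
                skb->tstamp.tv64 = 0;
                skb->pkt_type = PACKET_HOST;
                skb->skb_iif = 0;
                skb->mark = 0;
                skb_dst_drop(skb);
                secpath_reset(skb);
                nf_reset(skb);

                if (!xnet)
                        return;

                /* Only drop the socket reference when changing netns. */
                skb_orphan(skb);
        }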
-
Nicolas Dichtel authored
This argument is not used, let's remove it. Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Dichtel authored
This argument is not used, let's remove it. Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
nikolay@redhat.com authored
bond_compute_features is always called with RTNL held, so we can safely drop the read bond->lock. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
nikolay@redhat.com authored
We're protected by RTNL so nothing can happen and we can safely drop the read bond->lock. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
nikolay@redhat.com authored
We can drop the use of bond->lock for mutual exclusion in bond_3ad_update_lacp_rate and use RTNL in the sysfs store function instead. This way we'll prevent races with mode change and interface up/down as well as simplify update_lacp_rate by removing the check for port->slave because it'll always be initialized (done while enslaving with RTNL). This change will also help in the future removal of reader bond->lock from bond_enslave. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
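A sketch of the synchronization pattern described, with the parsing elided and the handler reduced to the locking (the to_bond() accessor and the restart_syscall() retry are the usual bonding sysfs idiom, assumed here):

        static ssize_t bonding_store_lacp(struct device *d,
                                          struct device_attribute *attr,
                                          const char *buf, size_t count)
        {
                struct bonding *bond = to_bond(d);

                if (!rtnl_trylock())
                        return restart_syscall();
                /* ... validate and apply the new lacp_rate under RTNL ... */
                rtnl_unlock();
                return count;
        }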
-
nikolay@redhat.com authored
We don't have to release all slaves when closing the bond dev, so remove the outdated comment and the braces around the left single statement. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
nikolay@redhat.com authored
This patch aims to remove a use of the bond->lock for mutual exclusion which will later allow easier migration to RCU of the users of this functionality. We use RTNL as a synchronizing mechanism since it's always held when send_peer_notif is set, and when it is decremented from the notifier function. We can also drop some locking, and fix the leakage of the send_peer_notif counter. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason Wang authored
As Michael points out, we used to limit the max pending DMAs to get better cache utilization. But it was not done correctly, since the check was only made when there were no new buffers submitted from the guest; a guest can easily exceed the limit by continuously sending packets. So this patch moves the check into the main loop. Tests show about a 5%-10% improvement in per-CPU throughput for guest tx. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
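A hedged sketch of the check moved into the tx loop (constant and field names follow drivers/vhost/net.c of that era, but the exact expression is an assumption):

        for (;;) {
                /* Count outstanding zerocopy DMAs in the UIO_MAXIOV ring and
                 * stop pulling new buffers from the guest once the cap is hit. */
                int pend = (nvq->upend_idx - nvq->done_idx + UIO_MAXIOV) % UIO_MAXIOV;

                if (zcopy && pend >= VHOST_MAX_PEND)
                        break;
                /* ... fetch the next descriptor and hand the skb to the backend ... */
        }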
-
Jason Wang authored
We used to poll the vhost queue before marking DMA as done. This is racy: if the vhost thread is woken up before the DMA is marked done, the signal can be missed. Fix this by always polling the vhost queue after marking the DMA as done. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason Wang authored
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and roll back this choice later. This can be avoided by deciding whether to use zerocopy once, checking all conditions up front. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
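A sketch of the single up-front decision (names approximate drivers/vhost/net.c; treat the condition as illustrative rather than the exact patch):

        /* Decide once whether this packet goes out zerocopy, instead of
         * defaulting to zerocopy and rolling the choice back later. */
        zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN &&
                     (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx;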
-
Jason Wang authored
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication. To avoid the overhead brought by __copy_to_user(), we use put_user() when only a single used element needs to be added. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason Wang authored
We tend to batch the used adding and signaling in vhost_zerocopy_callback(), which in some cases may result in more than 100 used buffers being updated in vhost_zerocopy_signal_used(). So switch to vhost_add_used_and_signal_n() to avoid multiple calls to vhost_add_used_and_signal(), which means far fewer used-index updates and memory barriers. A 2% performance improvement was seen on the netperf TCP_RR test. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason Wang authored
None of its callers use its return value, so let it return void. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jingoo Han authored
Use the wrapper functions for getting and setting the driver data using pci_dev instead of using dev_{get,set}_drvdata() with &pdev->dev, so we can directly pass a struct pci_dev. This is a purely cosmetic change. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
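The change boils down to the following substitution:

        pci_set_drvdata(pdev, priv);     /* was: dev_set_drvdata(&pdev->dev, priv); */
        priv = pci_get_drvdata(pdev);    /* was: priv = dev_get_drvdata(&pdev->dev); */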
-
Jingoo Han authored
Use the wrapper functions for getting and setting the driver data using pci_dev instead of using dev_{get,set}_drvdata() with &pdev->dev, so we can directly pass a struct pci_dev. This is a purely cosmetic change. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jingoo Han authored
Use the wrapper functions for getting and setting the driver data using platform_device instead of using dev_{get,set}_drvdata() with &pdev->dev, so we can directly pass a struct platform_device. This is a purely cosmetic change. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
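The same substitution, for platform devices:

        platform_set_drvdata(pdev, priv);     /* was: dev_set_drvdata(&pdev->dev, priv); */
        priv = platform_get_drvdata(pdev);    /* was: priv = dev_get_drvdata(&pdev->dev); */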
-
Jingoo Han authored
Use the wrapper functions for getting and setting the driver data using platform_device instead of using dev_{get,set}_drvdata() with &pdev->dev, so we can directly pass a struct platform_device. This is a purely cosmetic change. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jingoo Han authored
Use the wrapper functions for getting and setting the driver data using platform_device instead of using dev_{get,set}_drvdata() with &pdev->dev, so we can directly pass a struct platform_device. This is a purely cosmetic change. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
This function is being removed, so remove the reference to it. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
This function is being removed, rename the reference. Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Antonio Quartulli <ordex@autistici.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Convert the llc_<foo> static inlines to the equivalents from etherdevice.h and remove the llc_<foo> static inline functions:
llc_mac_null -> is_zero_ether_addr
llc_mac_multicast -> is_multicast_ether_addr
llc_mac_match -> ether_addr_equal
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
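An illustration of the three replacements using the <linux/etherdevice.h> helpers (the wrapper function is made up for the example):

        static bool llc_addrs_usable(const u8 *a, const u8 *b)
        {
                if (is_zero_ether_addr(a))          /* was llc_mac_null(a)      */
                        return false;
                if (is_multicast_ether_addr(a))     /* was llc_mac_multicast(a) */
                        return false;
                return ether_addr_equal(a, b);      /* was llc_mac_match(a, b)  */
        }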
-
Joe Perches authored
Use the new bool function ether_addr_equal to add some clarity and reduce the likelihood for misuse of compare_ether_addr for sorting. Done via cocci script (and a little typing):

$ cat compare_ether_addr.cocci
@@
expression a,b;
@@
-       !compare_ether_addr(a, b)
+       ether_addr_equal(a, b)

@@
expression a,b;
@@
-       compare_ether_addr(a, b)
+       !ether_addr_equal(a, b)

@@
expression a,b;
@@
-       !ether_addr_equal(a, b) == 0
+       ether_addr_equal(a, b)

@@
expression a,b;
@@
-       !ether_addr_equal(a, b) != 0
+       !ether_addr_equal(a, b)

@@
expression a,b;
@@
-       ether_addr_equal(a, b) == 0
+       !ether_addr_equal(a, b)

@@
expression a,b;
@@
-       ether_addr_equal(a, b) != 0
+       ether_addr_equal(a, b)

@@
expression a,b;
@@
-       !!ether_addr_equal(a, b)
+       ether_addr_equal(a, b)

Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
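For reference, a script like this is typically applied with spatch (flag spellings vary between Coccinelle versions; the target directories are examples):

        spatch --sp-file compare_ether_addr.cocci --in-place net/ drivers/net/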
-
Liu Junliang authored
Signed-off-by: Liu Junliang <liujunliang_ljl@163.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ajit Khaparde authored
SkyHawk-R can support VEB or VEPA mode. This patch will allow the user to set/query this switch setting. Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
The name of the function in the comment is __skb_alloc_page() while we are actually commenting __skb_alloc_pages(). Fix this typo and make it a valid kernel doc comment. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
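For context, a valid kernel-doc comment takes this shape (the parameter descriptions below are illustrative, not the patch's actual text):

        /**
         * __skb_alloc_pages - allocate pages, propagating pfmemalloc to the skb
         * @gfp_mask: allocation mask
         * @skb: skb to mark with pfmemalloc, or NULL
         * @order: allocation order (1 << order pages)
         */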
-
Claudiu Manoil authored
Fix the amount of sent bytes reported to BQL by reporting the number of bytes on wire in the xmit routine, and recording that value for each skb so it can be correctly confirmed on Tx confirmation cleanup. Reporting skb->len to BQL just before exiting xmit is not correct due to possible insertions of TOE block and alignment bytes in the skb->data, which are stripped off by the controller before transmission on wire. This led to a mismatch of (incorrectly) reported bytes to BQL between xmit and Tx confirmation, resulting in the Tx timeout firing, for the h/w tx timestamping acceleration case. There's no easy way to obtain the number of bytes on wire in the Tx confirmation routine, so skb->cb is used to convey that information from xmit to Tx confirmation, for now (as proposed by Eric). Revived the currently unused GFAR_CB() construct for that purpose. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
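A hedged sketch of the scheme (the GFAR_CB layout and field name are assumptions; only the BQL calls are the stock netdev API):

        struct gfar_cb {
                unsigned int bytes_sent;        /* bytes actually put on the wire */
        };
        #define GFAR_CB(skb) ((struct gfar_cb *)(skb)->cb)

        /* xmit: report and remember the on-wire byte count */
        GFAR_CB(skb)->bytes_sent = bytes_on_wire;
        netdev_tx_sent_queue(txq, bytes_on_wire);

        /* tx confirmation cleanup: complete the same number of bytes */
        netdev_tx_completed_queue(txq, howmany, total_bytes);  /* total of ->bytes_sent */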
-