- 19 Jul, 2008 5 commits
-
Benjamin Li authored
Signed-off-by: Benjamin Li <benli@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Benjamin Li authored
This patch will fix the following sparse warnings: /home/benli/sparse/bnx2.c:297:8: warning: symbol 'val' shadows an earlier one /home/benli/sparse/bnx2.c:286:60: originally declared here /home/benli/sparse/bnx2.c:7461:7: warning: symbol 'i' shadows an earlier one /home/benli/sparse/bnx2.c:7265:10: originally declared here Signed-off-by: Benjamin Li <benli@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Benjamin Li authored
Signed-off-by: Benjamin Li <benli@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Benjamin Li authored
This change allows the first TX ring (CID 16) and the first TSS TX ring (CID 32) to be used concurrently. Before this change, we could get TSO errors when both TX rings were used concurrently. Signed-off-by: Benjamin Li <benli@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
With the recent changes to tx multiqueue, e1000 was not calling netif_start_queue() before calling netif_wake_queue(). This causes an oops during loading of the driver. (Based on commit d55b53ff ("igb/ixgbe/e1000e: resolve tx multiqueue bug").) Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
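As a rough, hedged sketch (not the actual e1000 diff), the ordering the fix restores looks like this for a single-queue driver; example_open()/example_tx_done() and their surroundings are hypothetical, only the netif_start_queue()/netif_wake_queue() helpers come from the netdevice API:

    #include <linux/netdevice.h>

    /* Hypothetical single-queue driver: the queue must be marked started
     * in the open path before anything later tries to wake it. */
    static int example_open(struct net_device *netdev)
    {
        /* ... ring allocation, interrupt setup ... */
        netif_start_queue(netdev);
        return 0;
    }

    static void example_tx_done(struct net_device *netdev)
    {
        /* Legal only because the queue was started at open time. */
        netif_wake_queue(netdev);
    }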
-
- 18 Jul, 2008 35 commits
-
Jeff Kirsher authored
With the recent changes to tx multiqueue, igb/ixgbe/e1000e were not calling netif_tx_start_all_queues() before calling netif_tx_wake_all_queues(). This causes an issue during loading of the driver. In addition, e1000e was updated to use the updated tx multiqueue API. Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
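A hedged sketch of the multiqueue equivalent; the driver functions below are hypothetical, while netif_tx_start_all_queues() and netif_tx_wake_all_queues() are the netdevice helpers the commit message names:

    #include <linux/netdevice.h>

    /* Hypothetical multiqueue driver: start every TX queue at open time
     * so that later "wake all" calls act on queues that were started. */
    static int example_mq_open(struct net_device *netdev)
    {
        /* ... hardware bring-up ... */
        netif_tx_start_all_queues(netdev);
        return 0;
    }

    static void example_mq_link_up(struct net_device *netdev)
    {
        netif_tx_wake_all_queues(netdev);
    }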
-
Pavel Emelyanov authored
They are symmetrical to single_open ones :) Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
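A hedged sketch of how a per-net proc file can use these helpers; the file name, show routine and fops below are made up for illustration, while single_open_net()/single_release_net() and the seq_file plumbing are the real interfaces:

    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>
    #include <linux/seq_file_net.h>
    #include <net/net_namespace.h>

    static int example_seq_show(struct seq_file *seq, void *v)
    {
        struct net *net = seq->private;   /* stashed by single_open_net() */

        seq_printf(seq, "example value for net %p\n", net);
        return 0;
    }

    static int example_seq_open(struct inode *inode, struct file *file)
    {
        /* Takes a reference on the inode's net namespace for us. */
        return single_open_net(inode, file, example_seq_show);
    }

    static const struct file_operations example_seq_fops = {
        .owner   = THIS_MODULE,
        .open    = example_seq_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release_net,    /* drops the reference again */
    };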
-
Pavel Emelyanov authored
There are already 7 of them - time to kill some duplicate code. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
After all this stuff is moved outside, this function can look better. Besides, I tuned the error path in ip_proc_init_net to make it have only 2 exit points, not 3. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
This one became per-net long ago, but the corresponding proc file becomes per-net only now. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
All the statistics shown in this file have been made per-net already. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Now that all the statistics shown in this file are per-net, it is time to show them in the appropriate net. The appropriate net init/exit ops already exist - they make the sockstat file per-net - so just extend them. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
After moving all the stuff outside this function it looks a bit ugly - make it look better. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Similar to... ouch, I repeat myself. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Similar to ip and tcp ones :) Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Similar to tcp one. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
Proc temporarily uses stats from init_net. BTW, the TCP_XXX_STATS macros are beautiful again (no do { } while (0) wrapping) :) Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
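A hedged sketch of the kind of single-statement, per-net counter macro the message alludes to; the macro name and the exact mib field path are assumptions here, while SNMP_INC_STATS and TCP_MIB_ACTIVEOPENS are existing SNMP helpers/enums:

    /* Sketch only: expands to one statement, so no do { } while (0)
     * wrapper is needed around it. */
    #define TCP_INC_STATS_SKETCH(net, field) \
        SNMP_INC_STATS((net)->mib.tcp_statistics, field)

    /* usage, e.g.:  TCP_INC_STATS_SKETCH(sock_net(sk), TCP_MIB_ACTIVEOPENS); */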
-
Pavel Emelyanov authored
These are currently empty, but the contents of init_ipv4_mibs() will gradually migrate there. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
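A hedged sketch of what such an initially empty per-net init/exit pair typically looks like once registered; all names here are illustrative, while register_pernet_subsys() and struct pernet_operations are the existing netns API:

    #include <linux/init.h>
    #include <net/net_namespace.h>

    static __net_init int example_mib_init_net(struct net *net)
    {
        /* Allocations from init_ipv4_mibs() will migrate here, step by step. */
        return 0;
    }

    static __net_exit void example_mib_exit_net(struct net *net)
    {
        /* ...and the matching frees will migrate here. */
    }

    static struct pernet_operations example_mib_ops = {
        .init = example_mib_init_net,
        .exit = example_mib_exit_net,
    };

    /* Called from the subsystem's init path (or wired up via module_init). */
    static int __init example_mib_setup(void)
    {
        return register_pernet_subsys(&example_mib_ops);
    }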
-
Pavel Emelyanov authored
The only structure declared here is netns_mib, which will carry all our mibs. I didn't put the mibs in the existing netns_xxx structures, to make it possible to mark this one as properly aligned and keep it on a separate "read-mostly" cache line. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
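A hedged sketch of the shape such a structure can take; the member list is illustrative, while DEFINE_SNMP_STAT() and the mib types come from net/snmp.h:

    #include <net/snmp.h>

    /* Sketch: gather the per-net MIB pointers in one structure so the
     * whole block can be aligned onto its own read-mostly cache line. */
    struct netns_mib_sketch {
        DEFINE_SNMP_STAT(struct ipstats_mib, ip_statistics);
        DEFINE_SNMP_STAT(struct tcp_mib, tcp_statistics);
        DEFINE_SNMP_STAT(struct udp_mib, udp_statistics);
    };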
-
David S. Miller authored
This reverts commit 94d98424. Alan says it's not appropriate to remove this driver, Adrian Bunk also agrees with this revert. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Conflicts: Documentation/powerpc/booting-without-of.txt drivers/atm/Makefile drivers/net/fs_enet/fs_enet-main.c drivers/pci/pci-acpi.c net/8021q/vlan.c net/iucv/iucv.c
-
David S. Miller authored
Instead of 'pfifo_fast' we have just plain 'fifo_fast'. No priority queues, just a straight FIFO. This is necessary in order to legally have a separate qdisc per queue in multi-TX-queue setups, and thus get full parallelization. Signed-off-by: David S. Miller <davem@davemloft.net>
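A hedged sketch of what a single-band FIFO qdisc's enqueue/dequeue pair can look like when built on the generic helpers; this is illustrative, not the actual fifo_fast code:

    #include <net/sch_generic.h>
    #include <net/pkt_sched.h>

    /* Sketch: one sk_buff_head, no priority bands at all. */
    static int fifo_sketch_enqueue(struct sk_buff *skb, struct Qdisc *qdisc)
    {
        if (skb_queue_len(&qdisc->q) < qdisc_dev(qdisc)->tx_queue_len)
            return __qdisc_enqueue_tail(skb, qdisc, &qdisc->q);

        return qdisc_drop(skb, qdisc);
    }

    static struct sk_buff *fifo_sketch_dequeue(struct Qdisc *qdisc)
    {
        return __qdisc_dequeue_head(qdisc, &qdisc->q);
    }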
-
David S. Miller authored
We have to have exclusive access to the given qdisc anyways, so doing even more locking is superfluous. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Move the destruction of the old queue into qdisc_graft(). When operating on a root qdisc (ie. "parent == NULL"), apply the operation to all queues. The caller has grabbed a single implicit reference for this graft, therefore when we apply the change to more than one queue we must grab additional qdisc references. Otherwise, we are operating on a class of a specific parent qdisc, and therefore no multiqueue handling is necessary. Signed-off-by: David S. Miller <davem@davemloft.net>
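A hedged, simplified sketch of the reference rule described above (not the actual qdisc_graft() code): when one new root qdisc is installed on every TX queue, each queue beyond the first needs its own extra reference, because the caller only supplied one.

    #include <linux/netdevice.h>
    #include <net/sch_generic.h>

    /* Sketch: graft 'new' onto every TX queue of 'dev'; the refcnt field
     * name is an assumption for illustration. */
    static void graft_root_sketch(struct net_device *dev, struct Qdisc *new)
    {
        unsigned int i;

        for (i = 0; i < dev->num_tx_queues; i++) {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, i);

            if (new && i > 0)
                atomic_inc(&new->refcnt);
            /* ... swap txq's qdisc for 'new', destroy the old one ... */
            (void)txq;
        }
    }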
-
David S. Miller authored
We can simply use the qdisc->q.lock for all of the qdisc tree synchronization. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
No longer used. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Make sch_tree_lock() lock the qdisc's root. All of the users hold the RTNL semaphore and the root qdisc is not changing. Implement tbf_tree_{lock,unlock}() simply in terms of sch_tree_{lock,unlock}(). Signed-off-by: David S. Miller <davem@davemloft.net>
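A hedged sketch of the relationship the message describes; sch_tree_lock() and qdisc_root_lock() are the series' own helpers, and the *_sketch names are used here only to avoid claiming these are the exact tree definitions:

    #include <net/sch_generic.h>

    /* Sketch: tree-wide synchronization is simply the root qdisc's lock,
     * taken with BHs disabled; tbf's wrappers reduce to the generic ones. */
    #define sch_tree_lock_sketch(q)     spin_lock_bh(qdisc_root_lock(q))
    #define sch_tree_unlock_sketch(q)   spin_unlock_bh(qdisc_root_lock(q))

    #define tbf_tree_lock_sketch(q)     sch_tree_lock_sketch(q)
    #define tbf_tree_unlock_sketch(q)   sch_tree_unlock_sketch(q)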
-
David S. Miller authored
Lock the root of the qdisc being operated upon. All explicit references to qdisc_tree_lock() are now gone. The only remaining uses are via the sch_tree_{lock,unlock}() and tcf_tree_{lock,unlock}() macros. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
And give it its own lock. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
It just wants the qdisc tree to be synchronized, so grabbing qdisc_root_lock() is sufficient. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
It just wants the qdisc tree for the filter to be synchronized. So just BH lock qdisc_root_lock(q) instead. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This eliminates another qdisc_lock_tree user. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This allows less strict control of access to the qdisc attached to a netdev_queue. It is even allowed to enqueue into a qdisc which is in the process of being destroyed. The RCU handler will toss out those packets. We will need this to handle sharing of a qdisc amongst multiple TX queues. In such a setup the lock has to be shared, so will be inside of the qdisc itself. At which point the netdev_queue lock cannot be used to hard synchronize access to the ->qdisc pointer. One operation we have to keep inside of qdisc_destroy() is the list deletion. It is the only piece of state visible after the RCU quiesce period, so we have to undo it early and under the appropriate locking. The operations in the RCU handler do not need any locking because the qdisc tree is no longer visible to anything at that point. Signed-off-by: David S. Miller <davem@davemloft.net>
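A hedged sketch of the split described above; function and field names here are illustrative rather than the actual sch_generic.c symbols:

    #include <linux/rcupdate.h>
    #include <net/sch_generic.h>

    /* Runs after the RCU grace period: nothing can reach the qdisc now,
     * so no locking is needed for the remaining teardown. */
    static void qdisc_destroy_rcu_sketch(struct rcu_head *head)
    {
        /* the rcu_head member name inside struct Qdisc is assumed here */
        struct Qdisc *qdisc = container_of(head, struct Qdisc, rcu_head);

        /* ... ops->destroy(), module_put(), free the qdisc ... */
    }

    static void qdisc_destroy_sketch(struct Qdisc *qdisc)
    {
        /* The list linkage is the only state still visible after the
         * grace period, so undo it now, under the root lock held by
         * the caller. */
        list_del(&qdisc->list);
        call_rcu(&qdisc->rcu_head, qdisc_destroy_rcu_sketch);
    }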
-
David S. Miller authored
We are registering the device, there is no way anyone can get at this object's qdiscs yet in any meaningful way. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
When we have shared qdiscs, packets come out of the qdiscs for multiple transmit queues. Therefore it doesn't make any sense to schedule the transmit queue when logically we cannot know ahead of time the TX queue of the SKB that the qdisc->dequeue() will give us. Just for sanity I added a BUG check to make sure we never get into a state where the noop_qdisc is scheduled. Signed-off-by: David S. Miller <davem@davemloft.net>
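A hedged sketch of the sanity check mentioned; the function name and placement are assumptions, while noop_qdisc is the real kernel symbol:

    #include <net/sch_generic.h>

    /* Sketch: scheduling now takes a qdisc, and the noop qdisc must
     * never end up on the output schedule. */
    static void netif_schedule_sketch(struct Qdisc *q)
    {
        BUG_ON(q == &noop_qdisc);
        /* ... raise NET_TX softirq work for q as before ... */
    }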
-
David S. Miller authored
When code wants to lock the qdisc tree state, the logical operation it's performing is locking the top-level qdisc that sits at the root of the netdev_queue. Add qdisc_root_lock() to represent this and convert the easiest cases. In order for this to work out in all cases, we have to hook up the noop_qdisc to a dummy netdev_queue. Signed-off-by: David S. Miller <davem@davemloft.net>
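A hedged sketch of the helper this introduces: locking the tree means locking whatever qdisc sits at the root of the qdisc's netdev_queue (the exact body in the tree may differ; the _sketch suffix marks it as illustrative):

    #include <net/sch_generic.h>

    static inline spinlock_t *qdisc_root_lock_sketch(struct Qdisc *qdisc)
    {
        /* The root of the tree is the qdisc the netdev_queue points at;
         * its queue lock serializes the whole tree. */
        struct Qdisc *root = qdisc->dev_queue->qdisc;

        return &root->q.lock;
    }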
-
David S. Miller authored
Currently it is associated with a netdev_queue, but when we have qdisc sharing that no longer makes any sense. Signed-off-by: David S. Miller <davem@davemloft.net>
-