- 04 May, 2005 3 commits
-
-
Ed L Cashin authored
aoe-stat should work for built-in as well as module Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Ed L Cashin authored
improve allowed interfaces configuration Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 03 May, 2005 37 commits
-
-
J Hadi Salim authored
Long-standing bug: the policy to repeat an action never worked. Signed-off-by: J Hadi Salim <hadi@cyberus.ca> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
I found a bug that stopped IPsec/IPv6 from working. About a month ago IPv6 started using rt6i_idev->dev on the cached socket dst entries. If the cached socket dst entry is IPsec, then rt6i_idev will be NULL. Since we want to look at the rt6i_idev of the original route in this case, the easiest fix is to store rt6i_idev in the IPsec dst entry just as we do for a number of other IPv6 route attributes. Unfortunately this means that we need some new code to handle the references to rt6i_idev. That's why this patch is bigger than it would otherwise be. I've also done the same thing for IPv4 since it is conceivable that once these idev attributes start getting used for accounting, we probably need to dereference them for IPv4 IPsec entries too. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
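A minimal sketch of the idea described above, assuming the xfrm (IPsec) dst construction path knows the original route's device; the function names are illustrative and this is not the actual hunk:

	/* Sketch: when stacking an IPsec dst on top of an IPv6 route, copy the
	 * device's inet6_dev pointer into the xfrm dst and hold a reference so
	 * users of the cached socket dst can still reach rt6i_idev; drop the
	 * reference when the dst entry is destroyed.
	 */
	#include <net/ip6_route.h>
	#include <net/addrconf.h>

	static void xfrm6_copy_idev_sketch(struct rt6_info *xdst_rt,
					   struct net_device *dev)
	{
		xdst_rt->rt6i_idev = in6_dev_get(dev);	/* takes a reference */
	}

	static void xfrm6_release_idev_sketch(struct rt6_info *xdst_rt)
	{
		if (xdst_rt->rt6i_idev)
			in6_dev_put(xdst_rt->rt6i_idev);	/* drops the reference */
	}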
-
Stephen Hemminger authored
Fix qlen underrun when doing duplication with netem. If netem is used as leaf discipline, then the parent needs to be tweaked when packets are duplicated. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Hemminger authored
Netem currently dumps packets into the queue when the timer expires. This patch makes it work by self-clocking (more like TBF). It fixes a bug when 0 delay is requested (only doing loss or duplication). Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
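A rough illustration of what "self-clocking" means here (generic illustrative C, not the actual netem code): the dequeue path consults the head packet's scheduled send time instead of relying on a timer to dump the whole queue at once.

	/* Only release the head packet once its send time has arrived;
	 * otherwise re-arm the watchdog for exactly that moment and return
	 * nothing, so the qdisc paces itself packet by packet.
	 */
	struct delayed_pkt {
		unsigned long time_to_send;	/* in jiffies, illustrative */
		struct sk_buff *skb;
	};

	static struct sk_buff *self_clocked_dequeue(struct delayed_pkt *head,
						    unsigned long now,
						    void (*arm_timer)(unsigned long when))
	{
		if (!head)
			return NULL;
		if (!time_before(now, head->time_to_send))
			return head->skb;		/* due: hand it to the device */
		arm_timer(head->time_to_send);		/* not due yet: wake up then */
		return NULL;
	}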
-
Stephen Hemminger authored
Due to bugs in netem (fixed by later patches), it is possible to get qdisc qlen to go negative. If this happens the CPU ends up spinning forever in qdisc_run(). So add a BUG_ON() to trap it. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
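A hedged sketch of the kind of trap described; the exact placement and cast in the real patch may differ:

	static inline int qdisc_restart_sketch(struct net_device *dev)
	{
		struct Qdisc *q = dev->qdisc;

		/* q->q.qlen is unsigned; a negative value from a buggy qdisc
		 * would otherwise look like a huge queue and keep qdisc_run()
		 * spinning forever, so trap it immediately.
		 */
		BUG_ON((int) q->q.qlen < 0);

		/* ... normal dequeue/transmit logic continues here ... */
		return q->q.qlen;
	}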
-
Patrick McHardy authored
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Patrick McHardy authored
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tommy S. Christensen authored
Some network drivers call netif_stop_queue() when detecting loss of carrier. This leads to packets being queued up at the qdisc level for an unbounded period of time. In order to prevent this effect, the core networking stack will now cease to queue packets for any device that is operationally down (i.e. the queue is flushed and disabled). Signed-off-by: Tommy S. Christensen <tommy.christensen@tpack.net> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
If we free up a partially processed packet because its skb->len dropped to zero, we need to decrement qlen ourselves, because we are dropping out of the top-level loop that would otherwise do the decrement for us. Spotted by Herbert Xu. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
The qlen should continue to decrement, even if we pop partially processed SKBs back onto the receive queue. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Linus Torvalds authored
-
Nicolas Pitre authored
Patch from Nicolas Pitre Signed-off-by: Nicolas Pitre Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Sascha Hauer authored
Patch from Sascha Hauer This patch adds the missing include files for the i.MX framebuffer driver. Signed-off-by: Sascha Hauer Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Herbert Xu authored
Let's recap the problem. The current asynchronous netlink kernel message processing is vulnerable to these attacks:

1) Hit and run: Attacker sends one or more messages and then exits before they're processed. This may confuse/disable the next netlink user that gets the netlink address of the attacker since it may receive the responses to the attacker's messages. Proposed solutions: a) Synchronous processing. b) Stream mode socket. c) Restrict/prohibit binding.

2) Starvation: Because various netlink rcv functions were written to not return until all messages have been processed on a socket, it is possible for these functions to execute for an arbitrarily long period of time. If this is successfully exploited it could also be used to hold rtnl forever. Proposed solutions: a) Synchronous processing. b) Stream mode socket.

Firstly let's cross off solution c). It only solves the first problem and it has user-visible impacts. In particular, it'll break user space applications that expect to bind or communicate with specific netlink addresses (pid's). So we're left with a choice of synchronous processing versus SOCK_STREAM for netlink.

For the moment I'm sticking with the synchronous approach as suggested by Alexey since it's simpler and I'd rather spend my time working on other things. However, it does have a number of deficiencies compared to the stream mode solution:

1) User-space to user-space netlink communication is still vulnerable.

2) Inefficient use of resources. This is especially true for rtnetlink since the lock is shared with other users such as networking drivers. The latter could hold the rtnl while communicating with hardware which causes the rtnetlink user to wait when it could be doing other things.

3) It is still possible to DoS all netlink users by flooding the kernel netlink receive queue. The attacker simply fills the receive socket with a single netlink message that fills up the entire queue. The attacker then continues to call sendmsg with the same message in a loop.

Point 3) can be countered by retransmissions in user-space code, however it is pretty messy. In light of these problems (in particular, point 3), we should implement stream mode netlink at some point. In the mean time, here is a patch that implements synchronous processing. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
Here is a little optimisation for the cb_lock used by netlink_dump. While fixing that race earlier, I noticed that the reference count held by cb_lock is completely useless. The reason is that in order to obtain the protection of the reference count, you have to take the cb_lock. But the only way to take the cb_lock is through dereferencing the socket. That is, you must already possess a reference count on the socket before you can take advantage of the reference count held by cb_lock. As a corollary, we can remove the reference count held by the cb_lock. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Patrick McHardy authored
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Asim Shankar authored
htb_enqueue(): Free skb and return NET_XMIT_DROP if a packet is destined for the direct_queue but the direct_queue is full. (Before this: erroneously returned NET_XMIT_SUCCESS even though the packet was not enqueued) Signed-off-by: Asim Shankar <asimshankar@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
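A hedged sketch of the branch described, loosely following sch_htb.c conventions; the field names (direct_queue, direct_qlen, qstats) are assumptions here and this is not necessarily the exact hunk:

	/* In htb_enqueue(): a packet classified to the direct queue must be
	 * freed and reported as dropped once that queue is full, rather than
	 * returning NET_XMIT_SUCCESS for a packet that was never enqueued.
	 */
	if (q->direct_queue.qlen < q->direct_qlen) {
		__skb_queue_tail(&q->direct_queue, skb);
		q->direct_pkts++;
	} else {
		kfree_skb(skb);
		sch->qstats.drops++;
		return NET_XMIT_DROP;
	}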
-
Jesper Juhl authored
kfree() and vfree() can both deal with NULL pointers. This patch removes redundant NULL pointer checks from the ppp code in drivers/net/ Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
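The pattern being cleaned up looks roughly like this (illustrative, not a specific hunk from the ppp code):

	#include <linux/slab.h>

	static void example_cleanup_before(char *xbuf)
	{
		/* Before: the NULL check is redundant */
		if (xbuf)
			kfree(xbuf);
	}

	static void example_cleanup_after(char *xbuf)
	{
		/* After: kfree() already handles NULL, so call it unconditionally */
		kfree(xbuf);
	}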
-
Folkert van Heusden authored
Signed-off-by: Folkert van Heusden <folkert@vanheusden.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Folkert van Heusden authored
Signed-off-by: Folkert van Heusden <folkert@vanheusden.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lucas Correia Villa Real authored
This is a trivial fix for a typo on Kconfig, where the Generic Random Early Detection algorithm is abbreviated as RED instead of GRED. Signed-off-by: Lucas Correia Villa Real <lucasvr@gobolinux.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jesper Juhl authored
kfree(0) is perfectly valid, checking pointers for NULL before calling kfree() on them is redundant. The patch below cleans away a few such redundant checks (and while I was around some of those bits I couldn't stop myself from making a few tiny whitespace changes as well). Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk> Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Converts remaining rtnetlink_link tables to use C99 designated initializers to make grepping a little bit easier. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
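A self-contained illustration of the style change; the handler struct, message types and functions below are made up for the example, while the real tables live in the rtnetlink code:

	#include <stdio.h>

	struct link_handler {
		int (*doit)(const char *arg);
	};

	static int new_thing(const char *arg) { printf("new %s\n", arg); return 0; }
	static int del_thing(const char *arg) { printf("del %s\n", arg); return 0; }

	enum { MSG_NEWTHING, MSG_DELTHING, MSG_GETTHING, MSG_MAX };

	/* C99 designated initializers: each entry names the message type it
	 * handles, so grepping for MSG_DELTHING finds its handler directly,
	 * and unlisted types are left zero-initialized.
	 */
	static const struct link_handler handlers[MSG_MAX] = {
		[MSG_NEWTHING] = { .doit = new_thing },
		[MSG_DELTHING] = { .doit = del_thing },
	};

	int main(void)
	{
		return handlers[MSG_DELTHING].doit("example");
	}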
-
Thomas Graf authored
Converts rtm_min and rtm_max arrays to use C99 designated initializers for easier insertion of new message families. RTM_GETMULTICAST and RTM_GETANYCAST did not have the minimal message size specified, which means that the netlink message was parsed for routing attributes starting from the header. Adds the proper minimal message sizes for these messages (netlink header + common rtnetlink header) to fix this issue. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
RTM_MAX is currently set to the maximum reserved message type plus one, which causes two bugs when new types are assigned: a) if the new family registers only the NEW command in its reserved block, the array size for per-family entries is calculated one entry short, and b) if the new family registers all commands, RTM_MAX would point to the first entry of the block following this one and the rtnetlink receive path would accept a message type for a nonexistent family. This patch changes RTM_MAX to point to the maximum valid message type by aligning it to the start of the next block and subtracting one. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
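A sketch of the arithmetic described, assuming rtnetlink message types are reserved in blocks of four (NEW/DEL/GET/SET) and that a sentinel enum value sits one past the highest assigned type; the macro name is illustrative:

	/* Round the sentinel up to the start of the next block of four, then
	 * step back one: the result is the highest message type belonging to
	 * an already-reserved family block.
	 */
	#define RTM_TYPE_BLOCK_MASK	3
	#define RTM_MAX_SKETCH(sentinel) \
		((((sentinel) + RTM_TYPE_BLOCK_MASK) & ~RTM_TYPE_BLOCK_MASK) - 1)

	/* Example: if the last assigned type is 48 (a NEW command), the
	 * sentinel is 49 and RTM_MAX_SKETCH(49) == 51, covering the whole
	 * 48..51 block without spilling into the next family's block.
	 */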
-
Thomas Graf authored
Converts xfrm_msg_min and xfrm_dispatch to use C99 designated initializers to make grepping a little bit easier. Also replaces two hardcoded message types with meaningful names. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Makes the type > XFRM_MSG_MAX check behave correctly to protect access to xfrm_dispatch. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
This patch includes net/ipv6.h from addrconf.h since it needs ipv6_addr_set. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
I made a mistake in my last patch to the raw socket checksum code. I used the value of inet->cork.length as the length of the payload. While this works with normal packets, it breaks down when IPsec is present since the cork length includes the extension header length. So here is a patch to fix the length calculations. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Patrick McHardy authored
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Patrick McHardy authored
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Benjamin Herrenschmidt authored
gcc-4.0 generates altivec code implicitly when -mcpu indicates an altivec-capable CPU, which is not suitable for the kernel. However, we used to set -mcpu=970 when CONFIG_ALTIVEC was set because a gcc-3.x bug prevented us from using -maltivec along with -mcpu=power4, thus preventing the RAID6 altivec code from being built. This patch fixes all of this by testing for the gcc version. If 4.0 or later, just normally use -mcpu=power4 and let the RAID6 code add -maltivec to the few files it needs to be compiled with altivec support. For 3.x, we still use -mcpu=970 to work around the above problem, which is fine as 3.x will never implicitly generate altivec code. The Makefile hackery may not be the most lovely; I welcome anybody more skilled than me to improve it. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Russell King authored
We use one kmalloc to allocate two structures needlessly. Combine these two structures into one. Signed-off-by: Russell King <rmk@arm.linux.org.uk>
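The general shape of the change, as a sketch with made-up structure names rather than the driver's actual types:

	#include <linux/slab.h>

	struct port_regs  { unsigned int ctrl; };
	struct port_stats { unsigned long rx, tx; };

	/* Before: two separate kmalloc()s, one per structure.
	 * After: embed both in a single structure so one allocation (and one
	 * kfree()) covers them, and the two pieces stay together in memory.
	 */
	struct port {
		struct port_regs  regs;
		struct port_stats stats;
	};

	static struct port *port_alloc(void)
	{
		return kmalloc(sizeof(struct port), GFP_KERNEL);
	}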
-
Russell King authored
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
-
Russell King authored
VMALLOC_START and VMALLOC_OFFSET are common between all ARM machine classes. Move them into include/asm-arm/pgtable.h, but allow a machine class to override them if required. Signed-off-by: Russell King <rmk@arm.linux.org.uk>
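One plausible shape of the common definitions with a per-machine override, using #ifndef guards; the values and the exact override mechanism are illustrative, the point being that the generic header only defines what a machine class has not already defined:

	/* include/asm-arm/pgtable.h (sketch) */
	#ifndef VMALLOC_OFFSET
	#define VMALLOC_OFFSET	(8 * 1024 * 1024)
	#endif

	#ifndef VMALLOC_START
	#define VMALLOC_START	(((unsigned long)high_memory + VMALLOC_OFFSET) \
				 & ~(VMALLOC_OFFSET - 1))
	#endif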
-
Russell King authored
Rather than duplicate the assembly for debug macros in the decompressor head.S, use asm/arch/debug-macros.S instead. Signed-off-by: Russell King <rmk@arm.linux.org.uk>
-
Dave Kleikamp authored
Modify xtSearch so that it returns the next allocated block when the requested block is unmapped. This can be used to make sure we don't create a new extent that overlaps the next one. Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-