- 02 Sep, 2014 40 commits
-
H. Peter Anvin authored
commit 197725de upstream. Make espfix64 a hidden Kconfig option. This fixes the x86-64 UML build which had broken due to the non-existence of init_espfix_bsp() in UML: since UML uses its own Kconfig, this option does not appear in the UML build. This also makes it possible to make support for 16-bit segments a configuration option, for the people who want to minimize the size of the kernel. Reported-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
H. Peter Anvin <hpa@zytor.com> Cc: Richard Weinberger <richard@nod.at> Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com [ luis: backported to 3.11: adjusted context ] Signed-off-by:
Luis Henriques <luis.henriques@canonical.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
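Illustrative sketch only (not part of the commit itself), assuming CONFIG_X86_ESPFIX64 as the Kconfig symbol: once the feature is behind a hidden config option, the generic boot code references init_espfix_bsp() only when the symbol is selected, so builds such as UML that never provide it still link.

    #ifdef CONFIG_X86_ESPFIX64
    	init_espfix_bsp();	/* compiled out on configs (e.g. UML) without espfix64 */
    #endif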
-
H. Peter Anvin authored
commit 20b68535 upstream. Header guard is #ifndef, not #ifdef... Reported-by:
Fengguang Wu <fengguang.wu@intel.com> Signed-off-by:
H. Peter Anvin <hpa@linux.intel.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
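Illustration (sketch, not the upstream diff) of the guard pattern being fixed; the guard macro name follows arch/x86/include/asm/espfix.h but is shown here only as an example:

    #ifndef _ASM_X86_ESPFIX_H	/* must be #ifndef; #ifdef would always skip the body */
    #define _ASM_X86_ESPFIX_H
    /* ... declarations ... */
    #endif /* _ASM_X86_ESPFIX_H */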
-
H. Peter Anvin authored
commit e1fe9ed8 upstream. Sparse warns that the percpu variables aren't declared before they are defined. Rather than hacking around it, move espfix definitions into a proper header file. Reported-by:
Fengguang Wu <fengguang.wu@intel.com> Signed-off-by:
H. Peter Anvin <hpa@linux.intel.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
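Illustration (sketch): the percpu variables get a DECLARE_PER_CPU declaration in a shared header and keep their DEFINE_PER_CPU definition in the .c file, which is what silences the sparse warning. Variable names follow the espfix code but are shown here as an example only.

    /* asm/espfix.h */
    #include <linux/percpu.h>
    DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);
    DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr);

    /* espfix_64.c */
    DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);
    DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr);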
-
H. Peter Anvin authored
commit 3891a04a upstream. The IRET instruction, when returning to a 16-bit segment, only restores the bottom 16 bits of the user space stack pointer. This causes some 16-bit software to break, but it also leaks kernel state to user space. We have a software workaround for that ("espfix") for the 32-bit kernel, but it relies on a nonzero stack segment base which is not available in 64-bit mode. In checkin: b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels we "solved" this by forbidding 16-bit segments on 64-bit kernels, with the logic that 16-bit support is crippled on 64-bit kernels anyway (no V86 support), but it turns out that people are doing stuff like running old Win16 binaries under Wine and expect it to work. This change works around the problem by creating percpu "ministacks", each of which is mapped 2^16 times 64K apart. When we detect that the return SS is on the LDT, we copy the IRET frame to the ministack and use the relevant alias to return to userspace. The ministacks are mapped readonly, so if IRET faults we promote #GP to #DF which is an IST vector and thus has its own stack; we then do the fixup in the #DF handler. (Making #GP an IST exception would make the msr_safe functions unsafe in NMI/MC context, and quite possibly have other effects.) Special thanks to: - Andy Lutomirski, for the suggestion of using very small stack slots and copying (as opposed to mapping) the IRET frame there, and for the suggestion to mark them readonly and let the fault promote to #DF. - Konrad Wilk for paravirt fixup and testing. - Borislav Petkov for testing help and useful comments. Reported-by:
Brian Gerst <brgerst@gmail.com> Signed-off-by:
H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Andrew Lutomriski <amluto@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Dirk Hohndel <dirk@hohndel.org> Cc: Arjan van de Ven <arjan.van.de.ven@intel.com> Cc: comex <comexk@gmail.com> Cc: Alexander van Heukelum <heukelum@fastmail.fm> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> [ luis: backported to 3.11: adjusted context ] Signed-off-by:
Luis Henriques <luis.henriques@canonical.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
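Illustration (conceptual sketch only; the real test is a testb on the saved SS in the 64-bit entry code): the espfix path triggers when the saved SS selector on the IRET frame has the Table Indicator bit set, i.e. it points into the LDT.

    #include <asm/segment.h>

    /* hypothetical helper name, shown only to make the condition explicit */
    static inline bool ss_points_into_ldt(unsigned long ss)
    {
    	return (ss & SEGMENT_TI_MASK) == SEGMENT_LDT;	/* TI bit set => LDT selector */
    }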
-
H. Peter Anvin authored
commit 7ed6fb9b upstream. This reverts commit fa81511b in preparation of merging in the proper fix (espfix64). Signed-off-by:
H. Peter Anvin <hpa@zytor.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Jeff Layton authored
commit 6aa23d76 upstream. Currently, the client will attempt to use krb5i in the SETCLIENTID call even if rpc.gssd isn't running. When that fails, it'll then fall back to RPC_AUTH_UNIX. This introduces a delay when mounting if rpc.gssd isn't running, and causes warning messages to pop up in the ring buffer. Check to see if rpc.gssd is running before even attempting to use krb5i auth, and just silently skip trying to do so if it isn't. In the event that the admin is actually trying to mount with krb5*, it will still fail at a later stage of the mount attempt. Signed-off-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Trond Myklebust <Trond.Myklebust@netapp.com> Cc: Stefan Bader <stefan.bader@canonical.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
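Illustration (sketch with a hypothetical wrapper name; the real change lives in the NFSv4 SETCLIENTID auth selection): the krb5i attempt is skipped entirely when gssd_running() reports that no daemon is listening.

    /* hypothetical wrapper around the flavor choice */
    static rpc_authflavor_t nfs4_clid_flavor(struct nfs_client *clp)
    {
    	if (gssd_running(clp->cl_net))
    		return RPC_AUTH_GSS_KRB5I;	/* try integrity-protected krb5 */
    	return RPC_AUTH_UNIX;			/* no gssd: don't even attempt an upcall */
    }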
-
Jeff Layton authored
commit 89f84243 upstream. Now that we have a more reliable method to tell if gssd is running, we can replace the sn->gssd_running flag with a function that will query to see if it's up and running. There's also no need to attempt an upcall that we know will fail, so just return -EACCES if gssd isn't running. Finally, fix the warn_gss() message not to claim that the upcall timed out since we don't necessarily perform one now when gssd isn't running, and remove the extraneous newline from the message. Signed-off-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Trond Myklebust <Trond.Myklebust@netapp.com> [ kamal: 3.13-stable prereq for 6aa23d76 "nfs: check if gssd is running before attempting to use krb5i auth in SETCLIENTID call" ] Cc: Stefan Bader <stefan.bader@canonical.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Jeff Layton authored
commit 4b9a445e upstream. rpc.gssd will naturally hold open any pipe named */clnt*/gssd that shows up under rpc_pipefs. That behavior gives us a reliable mechanism to tell whether it's actually running or not. Create a new toplevel "gssd" directory in rpc_pipefs when it's mounted. Under that directory create another directory called "clntXX", and then within that a pipe called "gssd". We'll never send an upcall along that pipe, and any downcall written to it will just return -EINVAL. Signed-off-by:
Jeff Layton <jlayton@redhat.com> Signed-off-by:
Trond Myklebust <Trond.Myklebust@netapp.com> [ kamal: 3.13-stable prereq for 6aa23d76 "nfs: check if gssd is running before attempting to use krb5i auth in SETCLIENTID call" ] Cc: Stefan Bader <stefan.bader@canonical.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
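Illustration (sketch): because rpc.gssd keeps the dummy */clnt*/gssd pipe open, liveness can be derived from the pipe's open counts. Names follow net/sunrpc but this should be read as an approximation of the upstream helper, not its exact code.

    bool gssd_running(struct net *net)
    {
    	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
    	struct rpc_pipe *pipe = sn->gssd_dummy;

    	/* the daemon holds the pipe open, so any reader/writer means it is alive */
    	return pipe->nreaders || pipe->nwriters;
    }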
-
Andrey Utkin authored
[ Upstream commit 093758e3 ] This commit is guesswork, but it seems to make sense to drop this break, as otherwise the following line is never executed and becomes dead code. That following line actually saves the result of a local calculation through the pointer given as a function argument. So the proposed change makes sense if this code as a whole makes sense (though I am unable to analyze it as a whole). Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81641 Reported-by:
David Binderman <dcb314@hotmail.com> Signed-off-by:
Andrey Utkin <andrey.krieger.utkin@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
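Illustration (generic sketch of the bug class, not the sparc file itself): a break placed before the store makes the store unreachable, so the result is never written back through the output pointer.

    static void example(int op, unsigned long *resultp)
    {
    	unsigned long val = 0;

    	switch (op) {
    	case 0:
    		val = 1;
    		break;
    	case 1:
    		break;			/* bug: everything below is dead code */
    		*resultp = val;		/* never executed; the fix removes the break above */
    	}
    }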
-
Sowmini Varadhan authored
[ Upstream commit 4ec1b010 ] The LDC handshake could have been asynchronously triggered after ldc_bind() enables the ldc_rx() receive interrupt-handler (and thus intercepts incoming control packets) and before vio_port_up() calls ldc_connect(). If that is the case, ldc_connect() should return 0 and let the state-machine progress. Signed-off-by:
Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by:
Karl Volz <karl.volz@oracle.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Christopher Alexander Tobias Schulze authored
[ Upstream commit fe418231 ] Fix detection of BREAK on sunsab serial console: BREAK detection was only performed when there were also serial characters received simultaneously. To handle all BREAKs correctly, the check for BREAK and the corresponding call to uart_handle_break() must also be done if count == 0, therefore duplicate this code fragment and pull it out of the loop over the received characters. Patch applies to 3.16-rc6. Signed-off-by:
Christopher Alexander Tobias Schulze <cat.schulze@alice-dsl.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Christopher Alexander Tobias Schulze authored
[ Upstream commit 5cdceab3 ] Fix regression in bbc i2c temperature and fan control on some Sun systems that causes the driver to refuse to load due to the bbc_i2c_bussel resource not being present on the (second) i2c bus where the temperature sensors and fan control are located. (The check for the number of resources was removed when the driver was ported to a pure OF driver in mid 2008.) Signed-off-by:
Christopher Alexander Tobias Schulze <cat.schulze@alice-dsl.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit 4ca9a237 ] Based almost entirely upon a patch by Christopher Alexander Tobias Schulze. In commit db64fe02 ("mm: rewrite vmap layer") lazy VMAP tlb flushing was added to the vmalloc layer. This causes problems on sparc64. Sparc64 has two VMAP mapped regions and they are not contiguous with each other. First we have the malloc mapping area, then another unrelated region, then the vmalloc region. This "another unrelated region" is where the firmware is mapped. If the lazy TLB flushing logic in the vmalloc code triggers after we've had both a module unload and a vfree or similar, it will pass an address range that goes from somewhere inside the malloc region to somewhere inside the vmalloc region, and thus covering the openfirmware area entirely. The sparc64 kernel learns about openfirmware's dynamic mappings in this region early in the boot, and then services TLB misses in this area. But openfirmware has some locked TLB entries which are not mentioned in those dynamic mappings and we should thus not disturb them. These huge lazy TLB flush ranges cause those openfirmware locked TLB entries to be removed, resulting in all kinds of problems including hard hangs and crashes during reboot/reset. Besides causing problems like this, such huge TLB flush ranges are also incredibly inefficient. A plea has been made with the author of the VMAP lazy TLB flushing code, but for now we'll put a safety guard into our flush_tlb_kernel_range() implementation. Since the implementation has become non-trivial, stop defining it as a macro and instead make it a function in a C source file. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
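Illustration (sketch of the guard described above): when a lazy-flush range overlaps the firmware window, flush around it instead of through it. LOW_OBP_ADDRESS/HI_OBP_ADDRESS are the existing sparc64 bounds of the OpenFirmware area; treat the exact shape as illustrative.

    void flush_tlb_kernel_range(unsigned long start, unsigned long end)
    {
    	if (start < HI_OBP_ADDRESS && end > LOW_OBP_ADDRESS) {
    		/* range overlaps the firmware mappings: skip over them */
    		if (start < LOW_OBP_ADDRESS)
    			__flush_tlb_kernel_range(start, LOW_OBP_ADDRESS);
    		if (end > HI_OBP_ADDRESS)
    			__flush_tlb_kernel_range(HI_OBP_ADDRESS, end);
    	} else {
    		__flush_tlb_kernel_range(start, end);
    	}
    }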
-
David S. Miller authored
[ Upstream commit 18f38132 ] The assumption was that update_mmu_cache() (and the equivalent for PMDs) would only be called when the PTE being installed will be accessible by the user. This is not true for code paths originating from remove_migration_pte(). There are dire consequences for placing a non-valid PTE into the TSB. The TLB miss framework assumes that when a TSB entry matches we can just load it into the TLB and return from the TLB miss trap. So if a non-valid PTE is in there, we will deadlock taking the TLB miss over and over, never satisfying the miss. Just exit early from update_mmu_cache() and friends in this situation. Based upon a report and patch from Christopher Alexander Tobias Schulze. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
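Illustration (sketch of the early exit): a PTE that is not accessible, e.g. one installed by remove_migration_pte(), must never be seeded into the TSB.

    void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
    {
    	struct mm_struct *mm = vma->vm_mm;
    	pte_t pte = *ptep;

    	if (!pte_accessible(mm, pte))
    		return;		/* don't put a non-valid entry into the TSB */

    	/* ... existing TSB insertion ... */
    }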
-
David S. Miller authored
[ Upstream commit 5aa4ecfd ] This is to prevent previous stores from overlapping the block stores done by the memcpy loop. Based upon a glibc patch by Jose E. Marchesi. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit b18eb2d7 ] Access to the TSB hash tables during TLB misses requires that there be an atomic 128-bit quad load available so that we fetch a matching TAG and DATA field at the same time. On cpus prior to UltraSPARC-III only virtual address based quad loads are available. UltraSPARC-III and later provide physical address based variants which are easier to use. When we only have virtual address based quad loads available this means that we have to lock the TSB into the TLB at a fixed virtual address on each cpu when it runs that process. We can't just access the PAGE_OFFSET based aliased mapping of these TSBs because we cannot take a recursive TLB miss inside of the TLB miss handler without risking running out of hardware trap levels (some trap combinations can be deep, such as those generated by register window spill and fill traps). Without huge pages it's working perfectly fine, but when the huge TSB got added another chunk of fixed virtual address space was not allocated for this second TSB mapping. So we were mapping both the 8K and 4MB TSBs to the same exact virtual address, causing multiple TLB matches which gives undefined behavior. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit e5c460f4 ] This was found using Dave Jones' trinity tool. When a user process which is 32-bit performs a load or a store, the cpu chops off the top 32-bits of the effective address before translating it. This is because we run 32-bit tasks with the PSTATE_AM (address masking) bit set. We can't run the kernel with that bit set, so when the kernel accesses userspace no address masking occurs. Since a 32-bit process will have no mappings in that region we will properly fault, so we don't try to handle this using access_ok(), which can safely just be a NOP on sparc64. Real faults from 32-bit processes should never generate such addresses so a bug check was added long ago, and it barks in the logs if this happens. But it also barks when a kernel user access causes this condition, and that _can_ happen. For example, if a pointer passed into a system call is "0xfffffffc" and the kernel accesses 4 bytes offset from that pointer. Just handle such faults normally via the exception entries. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit fe866433 ] pte_ERROR() is not used anywhere, delete it. For pgd_ERROR() and pmd_ERROR(), output something similar to x86, giving the address of the pgd/pmd as well as its value. Also provide the caller, since these macros are invoked from pgd_clear_bad() and pmd_clear_bad() which provide little context as to what high level operation was occurring when the BAD state was detected. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
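Illustration (sketch of the x86-style report): print the entry's address, its raw value and the caller, which is exactly the context pgd_clear_bad()/pmd_clear_bad() otherwise hide.

    #define pmd_ERROR(e)						\
    	pr_err("%s:%d: bad pmd %p(%016lx) seen at (%pS)\n",		\
    	       __FILE__, __LINE__, &(e), pmd_val(e),			\
    	       __builtin_return_address(0))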
-
David S. Miller authored
[ Upstream commit 26cf4325 ] Instead of returning false we should at least check the most basic things, otherwise page table corruptions will be very difficult to debug. PMD and PTE tables are of size PAGE_SIZE, so none of the sub-PAGE_SIZE bits should be set. We also complement this with a check that the physical address the pud/pmd points to is valid memory. PowerPC was used as a guide while implementing this. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
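Illustration (conceptual sketch; the real sparc64 check validates the physical address with an architecture-specific helper rather than pfn_valid()): a page-table pointer must be PAGE_SIZE aligned and must reference valid memory.

    static inline int pmd_bad(pmd_t pmd)
    {
    	unsigned long val = pmd_val(pmd);

    	/* sub-PAGE_SIZE bits must be clear and the target must be real RAM */
    	return (val & ~PAGE_MASK) || !pfn_valid(val >> PAGE_SHIFT);
    }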
-
David S. Miller authored
[ Upstream commit 0eef331a ] Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit ee73887e ] In commit b2d43834 ("sparc64: Make PAGE_OFFSET variable."), the MAX_PHYS_ADDRESS_BITS value was increased (to 47). This constant reference to '41UL' was missed. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit 70ffc6eb ] Make get_user_insn() able to cope with huge PMDs. Next, make do_fault_siginfo() more robust when get_user_insn() can't actually fetch the instruction. In particular, use the MMU announced fault address when that happens, instead of calling compute_effective_address() and computing garbage. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit d037d163 ] If we have a 32-bit task we must chop off the top 32-bits of the 64-bit value just as the cpu would. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
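Illustration (sketch with a hypothetical helper name): mirror the cpu's PSTATE.AM behaviour whenever the kernel computes an effective address on behalf of a 32-bit task.

    static unsigned long masked_effective_address(unsigned long addr)
    {
    	if (test_thread_flag(TIF_32BIT))
    		addr &= 0xffffffff;	/* chop the top 32 bits, as the cpu would */
    	return addr;
    }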
-
David S. Miller authored
[ Upstream commit eaf85da8 ] Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit c2e4e676 ] When _PAGE_SPECIAL and _PAGE_PMD_HUGE were added to the mask, the comment was not updated. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit 04df419d ] The large PMD path needs to check _PAGE_VALID not _PAGE_PRESENT, to decide if it needs to bail and return 0. pmd_large() should therefore just check _PAGE_PMD_HUGE. Calls to gup_huge_pmd() are guarded with a check of pmd_large(), so we just need to add a valid bit check. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
David S. Miller authored
[ Upstream commit 51e5ef1b ] On sparc64 "present" and "valid" are separate PTE bits; this allows us to naturally distinguish between the user explicitly asking for PROT_NONE with mprotect() and other situations. However we weren't handling this properly in the huge PMD paths. First of all, the page table walker in the TSB miss path only checks for _PAGE_PMD_HUGE. So the generic pmdp_invalidate() would clear _PAGE_PRESENT but the TLB miss paths would still load it into the TLB as a valid huge PMD. Fix this by clearing the valid bit in pmdp_invalidate(), and also checking the valid bit in USER_PGTABLE_CHECK_PMD_HUGE using "brgez" since _PAGE_VALID is bit 63 in both the sun4u and sun4v pte layouts. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
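Illustration (sketch following the description above): clear _PAGE_VALID explicitly so the TSB/TLB miss path, which only looks at the valid/huge bits, can no longer load the invalidated PMD.

    void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp)
    {
    	pmd_t entry = *pmdp;

    	pmd_val(entry) &= ~_PAGE_VALID;

    	set_pmd_at(vma->vm_mm, address, pmdp, entry);
    	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
    }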
-
David S. Miller authored
[ Upstream commit 5b1e94fa ] This code was mistakenly using the exec bit from the PMD in all cases, even when the PMD isn't a huge PMD. If it's not a huge PMD, test the exec bit in the individual ptes down in tlb_batch_pmd_scan(). Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Kirill Tkhai authored
[ Upstream commit 49b6c01f ] One more place where we must not be able to be preempted or to be interrupted in RT. Always actually disable interrupts during synchronization cycle. Signed-off-by:
Kirill Tkhai <tkhai@yandex.ru> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
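Illustration (sketch; the surrounding function name is hypothetical): take a raw spinlock with interrupts disabled around the whole tick-synchronization exchange, so neither RT preemption nor an interrupt can stretch the cycle.

    static DEFINE_RAW_SPINLOCK(itc_sync_lock);

    static void synchronize_tick(void)
    {
    	unsigned long flags;

    	raw_spin_lock_irqsave(&itc_sync_lock, flags);
    	/* ... exchange tick counters with the master cpu ... */
    	raw_spin_unlock_irqrestore(&itc_sync_lock, flags);
    }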
-
David S. Miller authored
[ Upstream commit aa3449ee ] Only the second argument, 'op', is signed. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Eric Dumazet authored
[ Upstream commit 757efd32 ] Dave reported the following splat, caused by improper use of IP_INC_STATS_BH() in process context.
  BUG: using __this_cpu_add() in preemptible [00000000] code: trinity-c117/14551
  caller is __this_cpu_preempt_check+0x13/0x20
  CPU: 3 PID: 14551 Comm: trinity-c117 Not tainted 3.16.0+ #33
  ffffffff9ec898f0 0000000047ea7e23 ffff88022d32f7f0 ffffffff9e7ee207
  0000000000000003 ffff88022d32f818 ffffffff9e397eaa ffff88023ee70b40
  ffff88022d32f970 ffff8801c026d580 ffff88022d32f828 ffffffff9e397ee3
  Call Trace:
  [<ffffffff9e7ee207>] dump_stack+0x4e/0x7a
  [<ffffffff9e397eaa>] check_preemption_disabled+0xfa/0x100
  [<ffffffff9e397ee3>] __this_cpu_preempt_check+0x13/0x20
  [<ffffffffc0839872>] sctp_packet_transmit+0x692/0x710 [sctp]
  [<ffffffffc082a7f2>] sctp_outq_flush+0x2a2/0xc30 [sctp]
  [<ffffffff9e0d985c>] ? mark_held_locks+0x7c/0xb0
  [<ffffffff9e7f8c6d>] ? _raw_spin_unlock_irqrestore+0x5d/0x80
  [<ffffffffc082b99a>] sctp_outq_uncork+0x1a/0x20 [sctp]
  [<ffffffffc081e112>] sctp_cmd_interpreter.isra.23+0x1142/0x13f0 [sctp]
  [<ffffffffc081c86b>] sctp_do_sm+0xdb/0x330 [sctp]
  [<ffffffff9e0b8f1b>] ? preempt_count_sub+0xab/0x100
  [<ffffffffc083b350>] ? sctp_cname+0x70/0x70 [sctp]
  [<ffffffffc08389ca>] sctp_primitive_ASSOCIATE+0x3a/0x50 [sctp]
  [<ffffffffc083358f>] sctp_sendmsg+0x88f/0xe30 [sctp]
  [<ffffffff9e0d673a>] ? lock_release_holdtime.part.28+0x9a/0x160
  [<ffffffff9e0d62ce>] ? put_lock_stats.isra.27+0xe/0x30
  [<ffffffff9e73b624>] inet_sendmsg+0x104/0x220
  [<ffffffff9e73b525>] ? inet_sendmsg+0x5/0x220
  [<ffffffff9e68ac4e>] sock_sendmsg+0x9e/0xe0
  [<ffffffff9e1c0c09>] ? might_fault+0xb9/0xc0
  [<ffffffff9e1c0bae>] ? might_fault+0x5e/0xc0
  [<ffffffff9e68b234>] SYSC_sendto+0x124/0x1c0
  [<ffffffff9e0136b0>] ? syscall_trace_enter+0x250/0x330
  [<ffffffff9e68c3ce>] SyS_sendto+0xe/0x10
  [<ffffffff9e7f9be4>] tracesys+0xdd/0xe2
This is a followup of commits f1d8cba6 ("inet: fix possible seqlock deadlocks") and 7f88c6b2 ("ipv6: fix possible seqlock deadlock in ip6_finish_output2") Signed-off-by:
Eric Dumazet <edumazet@google.com> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org> Reported-by:
Dave Jones <davej@redhat.com> Acked-by:
Neil Horman <nhorman@tuxdriver.com> Acked-by:
Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
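Illustration (sketch of the shape of the change; variable names are illustrative): in the process-context transmit path the preemption-safe accessor must be used instead of the _BH variant.

    	if (!skb_dst(nskb)) {
    		/* was IP_INC_STATS_BH(), which assumes softirq context */
    		IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
    		kfree_skb(nskb);
    		return -EHOSTUNREACH;
    	}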
-
Sven Eckelmann authored
[ Upstream commit d9124268 ] batadv_frag_insert_packet was unable to handle out-of-order packets because it dropped them directly. This is caused by the way the fragmentation list is checked for the correct place to insert a fragmentation entry. The fragmentation code keeps the fragments in lists. The fragmentation entries are kept in descending order of sequence number. The list is traversed and each entry is compared with the new fragment. If the current entry has a smaller sequence number than the new fragment then the new one has to be inserted before the current entry. This ensures that the list is still in descending order. An out-of-order packet with a smaller sequence number than all entries in the list still has to be added to the end of the list. The used hlist has no information about the last entry in the list inside hlist_head and thus the last entry has to be calculated differently. Currently the code assumes that the iterator variable of hlist_for_each_entry can be used for this purpose after the hlist_for_each_entry finished. This is obviously wrong because the iterator variable is always NULL when the list was completely traversed. Instead the information about the last entry has to be stored in a different variable. This problem was introduced in 610bfc6b ("batman-adv: Receive fragmented packets and merge"). Signed-off-by:
Sven Eckelmann <sven@narfation.org> Signed-off-by:
Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by:
Antonio Quartulli <antonio@meshcoding.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
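Illustration (sketch of the corrected insertion loop, shown as the body of the insertion helper; struct and field names follow batman-adv but should be read as illustrative): remember the last visited entry so a fragment with the smallest sequence number can still be appended at the tail instead of being dropped.

    	frag_entry_last = NULL;
    	hlist_for_each_entry(frag_entry_curr, &chain->head, list) {
    		frag_entry_last = frag_entry_curr;	/* track the tail as we go */
    		if (frag_entry_curr->seqno < seqno) {
    			hlist_add_before(&frag_entry_new->list,
    					 &frag_entry_curr->list);
    			goto out;
    		}
    	}
    	/* out-of-order fragment with the smallest seqno: append after the tail */
    	if (frag_entry_last)
    		hlist_add_after(&frag_entry_last->list, &frag_entry_new->list);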
-
Sasha Levin authored
[ Upstream commit 06ebb06d ] Check for cases when the caller requests 0 bytes instead of running off and dereferencing potentially invalid iovecs. Signed-off-by:
Sasha Levin <sasha.levin@oracle.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Vlad Yasevich authored
[ Upstream commit fcdfe3a7 ] When performing segmentation, the mac_len value is copied right out of the original skb. However, this value is not always set correctly (like when the packet is VLAN-tagged) and we'll end up copying a bad value. One way to demonstrate this is to configure a VM which tags packets internally and turn off VLAN acceleration on the forwarding bridge port. The packets show up corrupt like this:
  16:18:24.985548 52:54:00:ab:be:25 > 52:54:00:26:ce:a3, ethertype 802.1Q (0x8100), length 1518: vlan 100, p 0, ethertype 0x05e0,
  0x0000: 8cdb 1c7c 8cdb 0064 4006 b59d 0a00 6402 ...|...d@.....d.
  0x0010: 0a00 6401 9e0d b441 0a5e 64ec 0330 14fa ..d....A.^d..0..
  0x0020: 29e3 01c9 f871 0000 0101 080a 000a e833 )....q.........3
  0x0030: 000f 8c75 6e65 7470 6572 6600 6e65 7470 ...unetperf.netp
  0x0040: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
  0x0050: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
  0x0060: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
  ...
This also leads to awful throughput as GSO packets are dropped and cause retransmissions. The solution is to set the mac_len using the values already available in the new skb. We've already adjusted all of the header offsets, so we might as well correctly figure out the mac_len using skb_reset_mac_len(). After this change, packets are segmented correctly and performance is restored. CC: Eric Dumazet <edumazet@google.com> Signed-off-by:
Vlad Yasevich <vyasevic@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
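Illustration (sketch): once the new segment's mac and network headers have been positioned, mac_len can be derived from them with skb_reset_mac_len() instead of copying the possibly stale value from the original skb.

    	/* the new segment's mac/network headers were set up just above */
    	skb_reset_mac_len(nskb);	/* mac_len = network_header - mac_header */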
-
Vlad Yasevich authored
[ Upstream commit 081e83a7 ] Macvlan devices do not initialize vlan_features. As a result, any vlan devices configured on top of macvlans perform very poorly. Initialize vlan_features based on the vlan features of the lower-level device. Signed-off-by:
Vlad Yasevich <vyasevic@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
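Illustration (sketch following the description; MACVLAN_FEATURES is the driver's existing feature mask, the surrounding setup function is implied): initialize vlan_features from the lower device so VLANs stacked on the macvlan keep their offloads.

    	/* in macvlan setup: inherit the lower device's VLAN offload capabilities */
    	dev->vlan_features = lowerdev->vlan_features & MACVLAN_FEATURES;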
-
Christoph Paasch authored
[ Upstream commit 1f74e613 ] In vegas we do a multiplication of the cwnd and the rtt. This may overflow and thus the result is stored in a u64. However, we first need to cast the cwnd so that 64-bit arithmetic is actually used. Then, we need to use do_div to allow this to be used on 32-bit arches. Cc: Stephen Hemminger <stephen@networkplumber.org> Cc: Neal Cardwell <ncardwell@google.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: David Laight <David.Laight@ACULAB.COM> Cc: Doug Leith <doug.leith@nuim.ie> Fixes: 8d3a564d (tcp: tcp_vegas cong avoid fix) Signed-off-by:
Christoph Paasch <christoph.paasch@uclouvain.be> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
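Illustration (sketch of the overflow-safe arithmetic, a fragment of the congestion-avoidance routine; the same pattern applies to the Veno fix in the next entry): promote snd_cwnd to 64 bits before the multiply, then divide with do_div() so the code also builds on 32-bit architectures.

    	u64 target_cwnd;

    	/* the (u64) cast forces a 64-bit multiply; do_div() handles u64/u32 on 32-bit arches */
    	target_cwnd = (u64)tp->snd_cwnd * vegas->baseRTT;
    	do_div(target_cwnd, rtt);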
-
Christoph Paasch authored
[ Upstream commit 45a07695 ] In veno we do a multiplication of the cwnd and the rtt. This may overflow and thus the result is stored in a u64. However, we first need to cast the cwnd so that 64-bit arithmetic is actually used. A first attempt at fixing 76f10177 ([TCP]: TCP Veno congestion control) was made by 15913114 (tcp: Overflow bug in Vegas), but it failed to add the required cast in tcp_veno_cong_avoid(). Fixes: 76f10177 ([TCP]: TCP Veno congestion control) Signed-off-by:
Christoph Paasch <christoph.paasch@uclouvain.be> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Eric Dumazet authored
[ Upstream commit 04ca6973 ] In "Counting Packets Sent Between Arbitrary Internet Hosts", Jeffrey and Jedidiah describe ways of exploiting Linux IP identifier generation to infer whether two machines are exchanging packets. With commit 73f156a6 ("inetpeer: get rid of ip_id_count"), we changed IP id generation, but this does not really prevent this side-channel technique. This patch adds a random amount of perturbation so that IP identifiers for a given destination [1] are no longer monotonically increasing after an idle period. Note that prandom_u32_max(1) returns 0, so if the generator is used at most once per jiffy, this patch inserts no hole in the ID sequence and does not increase collision probability. This is jiffies based, so in the worst case (HZ=1000), the id can rollover after ~65 seconds of idle time, which should be fine. We also change the hash used in __ip_select_ident() to not only hash on daddr, but also saddr and protocol, so that ICMP probes can not be used to infer information for other protocols. For IPv6, saddr is added into the hash as well, but not nexthdr. If I ping the patched target, we can see IDs are now hard to predict.
  21:57:11.008086 IP (...) A > target: ICMP echo request, seq 1, length 64
  21:57:11.010752 IP (... id 2081 ...) target > A: ICMP echo reply, seq 1, length 64
  21:57:12.013133 IP (...) A > target: ICMP echo request, seq 2, length 64
  21:57:12.015737 IP (... id 3039 ...) target > A: ICMP echo reply, seq 2, length 64
  21:57:13.016580 IP (...) A > target: ICMP echo request, seq 3, length 64
  21:57:13.019251 IP (... id 3437 ...) target > A: ICMP echo reply, seq 3, length 64
[1] TCP sessions use a per-flow ID generator not changed by this patch. Signed-off-by:
Eric Dumazet <edumazet@google.com> Reported-by:
Jeffrey Knockel <jeffk@cs.unm.edu> Reported-by:
Jedidiah R. Crandall <crandall@cs.unm.edu> Cc: Willy Tarreau <w@1wt.eu> Cc: Hannes Frederic Sowa <hannes@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
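Illustration (sketch following the description; array names, sizing and helper name are taken from the upstream patch but should be treated as illustrative): the generator adds a random delta bounded by the idle time in jiffies, so IDs stay hole-free under continuous use yet become unpredictable after idle periods.

    #define IP_IDENTS_SZ 2048u		/* illustrative sizing */
    static atomic_t ip_idents[IP_IDENTS_SZ];
    static u32 ip_tstamps[IP_IDENTS_SZ];

    static u32 ip_idents_reserve(u32 hash, int segs)
    {
    	atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
    	u32 *p_tstamp = ip_tstamps + hash % IP_IDENTS_SZ;
    	u32 old = ACCESS_ONCE(*p_tstamp);
    	u32 now = (u32)jiffies;
    	u32 delta = 0;

    	/* only the first user in a new jiffy adds the random perturbation */
    	if (old != now && cmpxchg(p_tstamp, old, now) == old)
    		delta = prandom_u32_max(now - old);

    	return atomic_add_return(segs + delta, p_id) - segs;
    }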
-
Eric Dumazet authored
[ Upstream commit 73f156a6 ] Ideally, we would need to generate IP ID using a per destination IP generator. Linux kernels used the inet_peer cache for this purpose, but this had a huge cost on servers disabling MTU discovery. 1) each inet_peer struct consumes 192 bytes 2) inetpeer cache uses a binary tree of inet_peer structs, with a nominal size of ~66000 elements under load. 3) lookups in this tree are hitting a lot of cache lines, as tree depth is about 20. 4) If server deals with many tcp flows, we have a high probability of not finding the inet_peer, allocating a fresh one, inserting it in the tree with same initial ip_id_count, (cf secure_ip_id()) 5) We garbage collect inet_peer aggressively. IP ID generation does not have to be 'perfect'. The goal is to avoid duplicates over a short period of time, so that reassembly units have a chance to complete reassembly of fragments belonging to one message before receiving other fragments with a recycled ID. We simply use an array of generators, and a Jenkins hash using the dst IP as a key. ipv6_select_ident() is put back into net/ipv6/ip6_output.c where it belongs (it is only used from this file). secure_ip_id() and secure_ipv6_id() are no longer needed. Rename ip_select_ident_more() to ip_select_ident_segs() to avoid unnecessary decrement/increment of the number of segments. Signed-off-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Dmitry Kravkov authored
[ Upstream commit fe26566d ] When a TSO packet is transmitted, an additional BD without a mapping is used to describe the packet. The BD needs special handling in tx completion.
  kernel: Call Trace:
  kernel: <IRQ> [<ffffffff815e19ba>] dump_stack+0x19/0x1b
  kernel: [<ffffffff8105dee1>] warn_slowpath_common+0x61/0x80
  kernel: [<ffffffff8105df5c>] warn_slowpath_fmt+0x5c/0x80
  kernel: [<ffffffff814a8c0d>] ? find_iova+0x4d/0x90
  kernel: [<ffffffff814ab0e2>] intel_unmap_page.part.36+0x142/0x160
  kernel: [<ffffffff814ad0e6>] intel_unmap_page+0x26/0x30
  kernel: [<ffffffffa01f55d7>] bnx2x_free_tx_pkt+0x157/0x2b0 [bnx2x]
  kernel: [<ffffffffa01f8dac>] bnx2x_tx_int+0xac/0x220 [bnx2x]
  kernel: [<ffffffff8101a0d9>] ? read_tsc+0x9/0x20
  kernel: [<ffffffffa01f8fdb>] bnx2x_poll+0xbb/0x3c0 [bnx2x]
  kernel: [<ffffffff814d041a>] net_rx_action+0x15a/0x250
  kernel: [<ffffffff81067047>] __do_softirq+0xf7/0x290
  kernel: [<ffffffff815f3a5c>] call_softirq+0x1c/0x30
  kernel: [<ffffffff81014d25>] do_softirq+0x55/0x90
  kernel: [<ffffffff810673e5>] irq_exit+0x115/0x120
  kernel: [<ffffffff815f4358>] do_IRQ+0x58/0xf0
  kernel: [<ffffffff815e94ad>] common_interrupt+0x6d/0x6d
  kernel: <EOI> [<ffffffff810bbff7>] ? clockevents_notify+0x127/0x140
  kernel: [<ffffffff814834df>] ? cpuidle_enter_state+0x4f/0xc0
  kernel: [<ffffffff81483615>] cpuidle_idle_call+0xc5/0x200
  kernel: [<ffffffff8101bc7e>] arch_cpu_idle+0xe/0x30
  kernel: [<ffffffff810b4725>] cpu_startup_entry+0xf5/0x290
  kernel: [<ffffffff815cfee1>] start_secondary+0x265/0x27b
  kernel: ---[ end trace 11aa7726f18d7e80 ]---
Fixes: a848ade4 ("bnx2x: add CSUM and TSO support for encapsulation protocols") Reported-by:
Yulong Pei <ypei@redhat.com> Cc: Michal Schmidt <mschmidt@redhat.com> Signed-off-by:
Dmitry Kravkov <Dmitry.Kravkov@qlogic.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
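Illustration (rough sketch only; the flag name is hypothetical, the ring macros follow bnx2x): the completion path must step over the descriptor that was added purely to describe the TSO packet, since it carries no DMA mapping to release.

    	if (tx_buf->flags & BNX2X_HAS_UNMAPPED_BD) {	/* hypothetical flag name */
    		/* this BD only described the packet; nothing to dma_unmap() here */
    		--nbd;
    		bd_idx = TX_BD(NEXT_TX_IDX(bd_idx));
    	}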
-