- 19 Jun, 2013 7 commits
-
-
Emmanuel Grumbach authored
commit a8778369 upstream. In 63b77bf4 ("iwlwifi: dvm: don't send zeroed LQ cmd") I tried to avoid sending a zeroed LQ cmd, but I made a (very) stupid mistake in the memcmp. Since that patch has been ported to stable, the fix should go to stable too. This fixes https://bugzilla.kernel.org/show_bug.cgi?id=58341 Reported-by: Hinnerk van Bruinehsen <h.v.bruinehsen@fu-berlin.de> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> [bwh: Backported to 3.2: adjust filename] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
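A minimal, self-contained sketch of the kind of check this fix restores (the struct layout and names are placeholders, not the iwlwifi code): compare the command buffer against a separate zeroed template, since a botched memcmp (for instance comparing a buffer against itself) never detects anything.

```c
#include <stdbool.h>
#include <string.h>

/* Placeholder stand-in for the real LQ command structure. */
struct lq_cmd {
	unsigned char raw[64];
};

/*
 * Return true only if the whole command is still zeroed.  Comparing
 * against a static zero-initialized template keeps the intent obvious
 * and avoids the "compare the buffer with itself" class of mistake.
 */
static bool lq_cmd_is_zeroed(const struct lq_cmd *cmd)
{
	static const struct lq_cmd zero;	/* all bytes are zero */

	return memcmp(cmd, &zero, sizeof(*cmd)) == 0;
}
```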
-
Johannes Berg authored
commit c8aa22db upstream. Since Eric's commit efe117ab ("Speedup ieee80211_remove_interfaces") there's a bug in mac80211 when it unregisters with AP_VLAN interfaces up. If the AP_VLAN interface was registered after the AP it belongs to (which is the typical case) and then we get into this code path, unregister_netdevice_many() will crash because it isn't prepared to deal with interfaces being closed in the middle of it. Exactly this happens though, because we iterate the list, find the AP master this AP_VLAN belongs to and dev_close() the dependent VLANs. After this, unregister_netdevice_many() won't pick up the fact that the AP_VLAN is already down and will do it again, causing a crash. Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Jeff Layton authored
commit 166faf21 upstream. Consider the case where we have a very short ip= string in the original mount options, and when we chase a referral we end up with a very long IPv6 address. Be sure to allow for that possibility when estimating the size of the string to allocate. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
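The sizing concern can be illustrated with a small userspace sketch (the helper name is an assumption, not the cifs code): when rebuilding the option string after chasing a referral, allocate for the worst-case IPv6 address rather than for the length of the original value.

```c
#include <netinet/in.h>		/* INET6_ADDRSTRLEN */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Build a fresh "ip=" mount option for the address learned from a
 * referral.  Size the buffer for the longest textual IPv6 address,
 * not for whatever short string the user originally passed.
 */
static char *rebuild_ip_option(const char *new_addr)
{
	size_t len = strlen("ip=") + INET6_ADDRSTRLEN + 1;
	char *opt = malloc(len);

	if (!opt)
		return NULL;
	snprintf(opt, len, "ip=%s", new_addr);
	return opt;
}
```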
-
Johannes Berg authored
commit c8157976 upstream. If a P2P-Device is present and another virtual interface triggers the connection work, the system crashes because it tries to check whether the P2P-Device's netdev (which doesn't exist) is up. Skip any wdevs that have no netdev to fix this. Reported-by: YanBo <dreamfly281@gmail.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
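The shape of the fix is a guard before the dereference; the following kernel-style fragment is only a sketch, and the list and field names approximate the cfg80211 structures rather than quoting them.

```c
/*
 * Walk the registered wireless devices; a P2P-Device has no netdev,
 * so skip it before touching any netdev state.
 */
list_for_each_entry(wdev, &rdev->wdev_list, list) {
	if (!wdev->netdev)		/* e.g. a P2P-Device */
		continue;
	if (!netif_running(wdev->netdev))
		continue;
	/* ... only now is it safe to look at the interface's state ... */
}
```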
-
Vincent Pelletier authored
commit e771451c upstream. libata honors DMADIR for regular commands, but not for internal commands used (among other) during device initialisation. This makes SATA-host-to-PATA-device bridges based on Silicon Image SiL3611 (such as "Abit Serillel 2") end up disabled when used with an ATAPI device after a few tries. Log output of the bridge being hot-plugged with an ATAPI drive: [ 9631.212901] ata1: exception Emask 0x10 SAct 0x0 SErr 0x40c0000 action 0xe frozen [ 9631.212913] ata1: irq_stat 0x00000040, connection status changed [ 9631.212923] ata1: SError: { CommWake 10B8B DevExch } [ 9631.212939] ata1: hard resetting link [ 9632.104962] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 9632.106393] ata1.00: ATAPI: PIONEER DVD-RW DVR-115, 1.06, max UDMA/33 [ 9632.106407] ata1.00: applying bridge limits [ 9632.108151] ata1.00: configured for UDMA/33 [ 9637.105303] ata1.00: qc timeout (cmd 0xa0) [ 9637.105324] ata1.00: failed to clear UNIT ATTENTION (err_mask=0x5) [ 9637.105335] ata1: hard resetting link [ 9638.044599] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 9638.047878] ata1.00: configured for UDMA/33 [ 9643.044933] ata1.00: qc timeout (cmd 0xa0) [ 9643.044953] ata1.00: failed to clear UNIT ATTENTION (err_mask=0x5) [ 9643.044963] ata1: limiting SATA link speed to 1.5 Gbps [ 9643.044971] ata1.00: limiting speed to UDMA/33:PIO3 [ 9643.044979] ata1: hard resetting link [ 9643.984225] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) [ 9643.987471] ata1.00: configured for UDMA/33 [ 9648.984591] ata1.00: qc timeout (cmd 0xa0) [ 9648.984612] ata1.00: failed to clear UNIT ATTENTION (err_mask=0x5) [ 9648.984619] ata1.00: disabled [ 9649.000593] ata1: hard resetting link [ 9649.939902] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) [ 9649.955864] ata1: EH complete With this patch, the drive enumerates correctly when libata is loaded with atapi_dmadir=1: [ 9891.810863] ata1: exception Emask 0x10 SAct 0x0 SErr 0x40c0000 action 0xe frozen [ 9891.810874] ata1: irq_stat 0x00000040, connection status changed [ 9891.810884] ata1: SError: { CommWake 10B8B DevExch } [ 9891.810900] ata1: hard resetting link [ 9892.762105] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 9892.763544] ata1.00: ATAPI: PIONEER DVD-RW DVR-115, 1.06, max UDMA/33, DMADIR [ 9892.763558] ata1.00: applying bridge limits [ 9892.765393] ata1.00: configured for UDMA/33 [ 9892.786063] ata1: EH complete [ 9892.792062] scsi 0:0:0:0: CD-ROM PIONEER DVD-RW DVR-115 1.06 PQ: 0 ANSI: 5 [ 9892.798455] sr2: scsi3-mmc drive: 12x/12x writer dvd-ram cd/rw xa/form2 cdda tray [ 9892.798837] sr 0:0:0:0: Attached scsi CD-ROM sr2 [ 9892.799109] sr 0:0:0:0: Attached scsi generic sg6 type 5 Based on a patch by Csaba Halász <csaba.halasz@gmail.com> on linux-ide: http://marc.info/?l=linux-ide&m=136121147832295&w=2 tj: minor formatting changes. Signed-off-by: Vincent Pelletier <plr.vincent@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Chew, Chiau Ee authored
commit fca8c90d upstream. Adds IDE-mode SATA Device IDs for the Intel BayTrail platform. Signed-off-by: Chew, Chiau Ee <chiau.ee.chew@intel.com> Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ben Hutchings authored
Commit 1619f441 'rapidio/tsi721: fix bug in MSI interrupt handling' (commit 1ccc819d upstream) makes the MSI handler disable and re-enable interrupts. When re-enabling interrupts, we should set the same flags as were originally set, but this changed in Linux 3.5 so the flags are now inconsistent in 3.2. In fact, the extra flag isn't even defined in 3.2. Remove the extra flag from the MSI handler. Reported-by: Steve Conklin <steve.conklin@canonical.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
- 30 May, 2013 33 commits
-
-
Ben Hutchings authored
-
Richard Weinberger authored
commit 4d94d6d0 upstream. At some places io_remap_pfn_range() is needed. UML has to serve it like all other archs do. Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Ian Abbott authored
commit 7d3135af upstream. When a low-level comedi driver auto-configures a device, a `struct comedi_dev_file_info` is allocated (as well as a `struct comedi_device`) by `comedi_alloc_board_minor()`. A pointer to the hardware `struct device` is stored as a cookie in the `struct comedi_dev_file_info`. When the low-level comedi driver auto-unconfigures the device, `comedi_auto_unconfig()` uses the cookie to find the `struct comedi_dev_file_info` so it can detach the comedi device from the driver, clean it up and free it. A problem arises if the user manually unconfigures and reconfigures the comedi device using the `COMEDI_DEVCONFIG` ioctl so that it is no longer associated with the original hardware device. The problem is that the cookie is not cleared, so that a call to `comedi_auto_unconfig()` from the low-level driver will still find it, detach it, clean it up and free it. Stop this problem from occurring by always clearing the `hardware_device` cookie in the `struct comedi_dev_file_info` whenever the `COMEDI_DEVCONFIG` ioctl call is successful. Signed-off-by: Ian Abbott <abbotti@mev.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Alan Cox authored
commit e1d45ae1 upstream. If we set mantis->fe to NULL on an error, it's not a good idea to then pass NULL to the unregister paths and oops. Resolves-bug: https://bugzilla.kernel.org/show_bug.cgi?id=16473 Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Wei Yongjun authored
commit 35623715 upstream. Fix to return -ENODEV in the chip not found error handling case instead of 0, as done elsewhere in this function. Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Cc: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Cong Wang authored
[ Upstream commit 84c4a9df ] We forget to call dev_put() on the error path in xfrm6_fill_dst(); its caller doesn't handle this. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
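The underlying rule, sketched below in kernel style (simplified, not the xfrm6 code; `do_more_setup()` is hypothetical): a reference taken with dev_hold() must be dropped with dev_put() on every failure path before returning.

```c
/* Simplified error-path sketch: pair dev_hold() with dev_put() on failure. */
static int fill_dst_sketch(struct dst_entry *dst, struct net_device *dev)
{
	int err;

	dev_hold(dev);
	dst->dev = dev;

	err = do_more_setup(dst);	/* hypothetical follow-up step */
	if (err) {
		dev_put(dev);		/* undo the reference we just took */
		dst->dev = NULL;
		return err;
	}
	return 0;
}
```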
-
Eric Dumazet authored
[ Upstream commit f77d6021 ] We have seen multiple NULL dereferences in __inet6_lookup_established(). After analysis, I found that inet6_sk() could be NULL while the check for sk_family == AF_INET6 was true. The bug was added in linux-2.6.29 when RCU lookups were introduced in the UDP and TCP stacks. Once an IPv6 socket using SLAB_DESTROY_BY_RCU is inserted in a hash table, we can no longer clear the pinet6 field. This patch extends the logic used in commit fcbdf09d ("net: fix nulls list corruptions in sk_prot_alloc"): the TCP/UDP/UDPLite IPv6 protocols provide their own .clear_sk() method to make sure we do not clear the pinet6 field. At socket clone phase, we do not really care, as cloning the parent's (non-NULL) pinet6 is not adding a fatal race. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Jiri Pirko authored
[ Upstream commit 233c7df0, note that I had to add list_first_or_null_rcu to rculist.h in order to accommodate this fix. ] Currently, if a macvlan in passthru mode is created, data are rxed and you remove this device, the following panic happens: NULL pointer dereference at 0000000000000198 IP: [<ffffffffa0196058>] macvlan_handle_frame+0x153/0x1f7 [macvlan] I'm using the following script to trigger this: <script> while [ 1 ] do ip link add link e1 name macvtap0 type macvtap mode passthru ip link set e1 up ip link set macvtap0 up IFINDEX=`ip link |grep macvtap0 | cut -f 1 -d ':'` cat /dev/tap$IFINDEX >/dev/null & ip link del dev macvtap0 done </script> I run this script while "ping -f" is running on another machine to send packets to e1 rx. The reason for the panic is that list_first_entry() is blindly called in macvlan_handle_frame() even if the list is empty; vlan is set to an incorrect pointer, which leads to the crash. I'm fixing this by protecting the port->vlans list with rcu and by preventing an incorrect pointer from being used in case the list is empty. Introduced by: commit eb06acdc "macvlan: Introduce 'passthru' mode to takeover the underlying device" Signed-off-by: Jiri Pirko <jiri@resnulli.us> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
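A kernel-style fragment of the pattern involved (field names are close to, but not necessarily identical with, the macvlan structures): under RCU, fetch the first entry and handle the empty-list case in one step instead of calling list_first_entry() on a possibly empty list.

```c
/*
 * The list may already be empty while the device is being torn down;
 * list_first_or_null_rcu() returns NULL in that case instead of handing
 * back a bogus pointer the way a blind list_first_entry() would.
 */
vlan = list_first_or_null_rcu(&port->vlans, struct macvlan_dev, list);
if (!vlan)
	return RX_HANDLER_PASS;		/* nothing to deliver to */
```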
-
Josh Boyer authored
[ Upstream commit 4f924b2a ] Protect the SIOCGCM* ioctl macros with parentheses. Reported-by: Paul Wouters <pwouters@redhat.com> Signed-off-by: Josh Boyer <jwboyer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
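The fix pattern is plain macro hygiene; a generic illustration follows (the macro names and base value are placeholders, not the real SIOCGCM* definitions).

```c
/* Unparenthesized arithmetic in a macro expands badly inside expressions. */
#define PRIV_IOCTL_BASE		0x89F0			/* placeholder value */

#define GET_PARAMS_BAD		PRIV_IOCTL_BASE + 1	/* breaks in expressions */
#define GET_PARAMS_GOOD		(PRIV_IOCTL_BASE + 1)	/* expands as intended */

/* e.g. 2 * GET_PARAMS_BAD expands to 2 * 0x89F0 + 1, not 2 * (0x89F0 + 1). */
```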
-
Sergei Shtylyov authored
[ Upstream commit 4b264a16 ] The driver wrongly claimed I/O ports at an address returned by pci_iomap() -- even if it was passed an MMIO address. Fix this by claiming/releasing all PCI resources in the PCI driver's probe()/remove() methods instead and get rid of 'must_free_region' flag weirdness (why would Cardbus claim anything for us?). Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Sergei Shtylyov authored
[ Upstream commit c81400be ] When unloading the driver that drives an EISA board, a message similar to the following one is displayed: Trying to free nonexistent resource <0000000000013000-000000000001301f> Then a user is unable to reload the driver because the resource it requested in the previous load hasn't been freed. This happens most probably due to a typo in vortex_eisa_remove() which calls release_region() with 'dev->base_addr' instead of 'edev->base_addr'... Reported-by: Matthew Whitehead <tedheadster@gmail.com> Tested-by: Matthew Whitehead <tedheadster@gmail.com> Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Daniel Borkmann authored
[ Upstream commit 8da3056c ] Jakub reported that it is fairly easy to trigger the BUG() macro from user space with TPACKET_V3's RX_RING by just giving a wrong header status flag. We already had a similar situation in commit 7f5c3e3a (``af_packet: remove BUG statement in tpacket_destruct_skb'') where this was the case on the TX_RING side and could be triggered from user space. So really, don't use BUG() or BUG_ON() unless there's really no way out, i.e. don't use it for consistency checking when there's user space involved, no excuses, especially not if you're slapping the user with WARN + dump_stack + BUG all at once. The two functions of concern are: prb_retire_current_block() [when block status != TP_STATUS_KERNEL] prb_open_block() [when block_status != TP_STATUS_KERNEL] Calls to prb_open_block() are guarded by earlier checks that block_status is really TP_STATUS_KERNEL (racy!), but the first BUG() is easily triggerable from user space. The system still behaves stably after they are removed. Also remove that yoda condition entirely, since it's already guarded. Reported-by: Jakub Zawadzki <darkjames-ws@darkjames.pl> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
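A kernel-style fragment of the replacement pattern (names simplified, not the af_packet code): when a state check can be violated by user space, warn once and bail out rather than halting the machine.

```c
/*
 * The block status lives in memory user space can scribble on, so treat
 * a bad value as input to reject, not as a kernel invariant to BUG() on.
 */
if (unlikely(status != TP_STATUS_KERNEL)) {
	WARN_ONCE(1, "tpacket: unexpected block status %u\n", status);
	return;		/* degrade gracefully instead of BUG() */
}
```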
-
stephen hemminger authored
[ Upstream commit 83401eb4 ] A bridge should only send a topology change notice if it is not the root bridge. It is possible for the message age timer to elect itself as a new root bridge, and still have a topology change timer running but waiting for the bridge lock on another CPU. Solve the race by checking if we are the root bridge before continuing. This was the root cause of the cases where br_send_tcn_bpdu would OOPS. Reported-by: JerryKang <jerry.kang@samsung.com> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Jamal Hadi Salim authored
[ Upstream commit 0dcffd09 ] Deal with changes in newer xtables while maintaining backward compatibility. Thanks to Jan Engelhardt for suggestions. Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Matthew Whitehead authored
[ Upstream commit 3b54912f ] The venerable 3c509 driver only sets its device parent in one case, the ISAPnP one. It does this with the SET_NETDEV_DEV function. It should register with the device hierarchy in two additional cases: standard (non-PnP) ISA and EISA. - Currently they appear here: /sys/devices/virtual/net/eth0 (standard ISA) /sys/devices/virtual/net/eth1 (EISA) - Rather, they should instead be here: /sys/devices/isa/3c509.0/net/eth0 (standard ISA) /sys/devices/pci0000:00/0000:00:07.0/00:04/net/eth1 (EISA) Tested on ISA and EISA boards. Signed-off-by: Matthew Whitehead <tedheadster@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
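The idiom being added to the ISA and EISA probe paths, sketched below in kernel style (the parent device pointer is illustrative): set the parent before registration so the netdev lands under its bus device in sysfs.

```c
/*
 * Attach the net_device to its bus parent before register_netdev(), so
 * it shows up under /sys/devices/<bus-path>/net/ethX instead of
 * /sys/devices/virtual/net/.
 */
SET_NETDEV_DEV(dev, &isa_or_eisa_dev->dev);	/* illustrative parent */
err = register_netdev(dev);
if (err)
	return err;
```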
-
Eric Dumazet authored
[ Upstream commit 09316255 ] Before escaping RCU protected section and adding packet into prequeue, make sure the dst is refcounted. Reported-by: Mike Galbraith <bitbucket@online.de> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Richard Weinberger authored
commit 8c58bf3e upstream. Using this parameter one can disable the storage_size/2 check if he is really sure that the UEFI does sane gc and fulfills the spec. This parameter is useful if a device uses more than 50% of the storage by default. The Intel DQSW67 desktop board is such a sucker, for example. Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Matt Fleming <matt.fleming@intel.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Richard Weinberger authored
commit 7791c842 upstream. Some EFI implementations always return a MaximumVariableSize of 0, so check against max_size only if it is non-zero. My Intel DQ67SW desktop board has such an implementation. Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Sergey Vlasov authored
commit 3668011d upstream. Fixes build with CONFIG_EFI_VARS=m which was broken after the commit "x86, efivars: firmware bug workarounds should be in platform code". Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Matt Fleming authored
commit a6e4d5a0 upstream. Let's not burden ia64 with checks in the common efivars code that we're not writing too much data to the variable store. That kind of thing is an x86 firmware bug, plain and simple. efi_query_variable_store() provides platforms with a wrapper in which they can perform checks and workarounds for EFI variable storage bugs. Cc: H. Peter Anvin <hpa@zytor.com> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Philipp Reisner authored
commit 7c689e63 upstream. With an automatic after split-brain recovery policy of "after-sb-1pri call-pri-lost-after-sb", when trying to drbd_set_role() to R_SECONDARY, we run into a deadlock. This was first recognized and supposedly fixed by 2009-06-10 "Fixed a deadlock when using automatic split brain recovery when both nodes are" replacing drbd_set_role() with drbd_change_state() in that code-path, but the first hunk of that patch forgets to remove the drbd_set_role(). We apparently only ever tested the "two primaries" case. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Tomoya MORINAGA authored
commit 5c1ef591 upstream. pdc_desc_get() is called from pd_prep_slave_sg, and that function is called from interrupt context (e.g. the UART driver "pch_uart.c"). In fact, I saw a kernel error message. So GFP_ATOMIC must be used, not GFP_NOIO. Signed-off-by: Tomoya MORINAGA <tomoya.rohm@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
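The rule behind the fix, as a kernel-style fragment (descriptor and variable names are illustrative): an allocation reachable from interrupt context may not sleep, so it must use GFP_ATOMIC rather than GFP_NOIO or GFP_KERNEL.

```c
/*
 * This path can be reached from interrupt context (a UART driver calling
 * the prep_slave_sg hook), so the descriptor allocation must not sleep.
 */
desc = kzalloc(sizeof(*desc), GFP_ATOMIC);	/* not GFP_NOIO */
if (!desc)
	return NULL;
```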
-
Hans Schillstrom authored
commit f7a1dd6e upstream. The reason for this patch is a crash in kmemdup caused by returning from get_callid with uninitialized matchoff and matchlen. Remove the zero check of matchlen since it's done by ct_sip_get_header(). BUG: unable to handle kernel paging request at ffff880457b5763f IP: [<ffffffff810df7fc>] kmemdup+0x2e/0x35 PGD 27f6067 PUD 0 Oops: 0000 [#1] PREEMPT SMP Modules linked in: xt_state xt_helper nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_mangle xt_connmark xt_conntrack ip6_tables nf_conntrack_ftp ip_vs_ftp nf_nat xt_tcpudp iptable_mangle xt_mark ip_tables x_tables ip_vs_rr ip_vs_lblcr ip_vs_pe_sip ip_vs nf_conntrack_sip nf_conntrack bonding igb i2c_algo_bit i2c_core CPU 5 Pid: 0, comm: swapper/5 Not tainted 3.9.0-rc5+ #5 /S1200KP RIP: 0010:[<ffffffff810df7fc>] [<ffffffff810df7fc>] kmemdup+0x2e/0x35 RSP: 0018:ffff8803fea03648 EFLAGS: 00010282 RAX: ffff8803d61063e0 RBX: 0000000000000003 RCX: 0000000000000003 RDX: 0000000000000003 RSI: ffff880457b5763f RDI: ffff8803d61063e0 RBP: ffff8803fea03658 R08: 0000000000000008 R09: 0000000000000011 R10: 0000000000000011 R11: 00ffffffff81a8a3 R12: ffff880457b5763f R13: ffff8803d67f786a R14: ffff8803fea03730 R15: ffffffffa0098e90 FS: 0000000000000000(0000) GS:ffff8803fea00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffff880457b5763f CR3: 0000000001a0c000 CR4: 00000000001407e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Process swapper/5 (pid: 0, threadinfo ffff8803ee18c000, task ffff8803ee18a480) Stack: ffff8803d822a080 000000000000001c ffff8803fea036c8 ffffffffa000937a ffffffff81f0d8a0 000000038135fdd5 ffff880300000014 ffff880300110000 ffffffff150118ac ffff8803d7e8a000 ffff88031e0118ac 0000000000000000 Call Trace: <IRQ> [<ffffffffa000937a>] ip_vs_sip_fill_param+0x13a/0x187 [ip_vs_pe_sip] [<ffffffffa007b209>] ip_vs_sched_persist+0x2c6/0x9c3 [ip_vs] [<ffffffff8107dc53>] ? __lock_acquire+0x677/0x1697 [<ffffffff8100972e>] ? native_sched_clock+0x3c/0x7d [<ffffffff8100972e>] ? native_sched_clock+0x3c/0x7d [<ffffffff810649bc>] ? sched_clock_cpu+0x43/0xcf [<ffffffffa007bb1e>] ip_vs_schedule+0x181/0x4ba [ip_vs] ... Signed-off-by: Hans Schillstrom <hans@schillstrom.com> Acked-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
libin authored
commit fd9b86d3 upstream. Commit 201c373e ("sched/debug: Limit sd->*_idx range on sysctl") was an incomplete bug fix. This patch fixes sd->*_idx limit range to [0 ~ CPU_LOAD_IDX_MAX-1], avoiding array overflow caused by setting sd->*_idx to CPU_LOAD_IDX_MAX on sysctl. Signed-off-by: Libin <huawei.libin@huawei.com> Cc: <jiang.liu@huawei.com> Cc: <guohanjun@huawei.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/51626610.2040607@huawei.com Signed-off-by: Ingo Molnar <mingo@kernel.org> [bwh: Backported to 3.2: adjust filename] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
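For illustration, the accepted range can be written out as a tiny self-contained check; the real fix adjusts the sysctl min/max bounds rather than adding a helper like this, and CPU_LOAD_IDX_MAX is redefined here only to keep the sketch standalone (it is 5 in the kernel, the size of rq->cpu_load[]).

```c
/* Sketch of the invariant: a load index must stay inside the table. */
#define CPU_LOAD_IDX_MAX 5	/* redefined locally for the example */

static int load_idx_is_valid(int idx)
{
	return idx >= 0 && idx <= CPU_LOAD_IDX_MAX - 1;
}
```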
-
Namhyung Kim authored
commit 201c373e upstream. Various sd->*_idx's are used for referring to the rq's load average table when selecting a cpu to run. However, they can be set to any number with sysctl knobs, which can crash the kernel if something bad is given. Fix it by limiting them to the actual range. Signed-off-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1345104204-8317-1-git-send-email-namhyung@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> [bwh: Backported to 3.2: - Adjust filename - s/umode_t/mode_t/] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Sarah Sharp authored
commit a83d6755 upstream. When a device attached to the roothub is suspended, the endpoint rings are stopped. The host may generate a completion event with the completion code set to 'Stopped' or 'Stopped Invalid' when the ring is halted. The current xHCI code prints a warning in that case, which can be really annoying if the USB device is coming into and out of suspend. Remove the unnecessary warning. Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Tested-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Cliff Wickman authored
commit a9ff785e upstream. A panic can be caused by simply cat'ing /proc/<pid>/smaps while an application has a VM_PFNMAP range. It happened in-house when a benchmarker was trying to decipher the memory layout of his program. /proc/<pid>/smaps and similar walks through a user page table should not be looking at VM_PFNMAP areas. Certain tests in walk_page_range() (specifically split_huge_page_pmd()) assume that all the mapped PFN's are backed with page structures. And this is not usually true for VM_PFNMAP areas. This can result in panics on kernel page faults when attempting to address those page structures. There are a half dozen callers of walk_page_range() that walk through a task's entire page table (as N. Horiguchi pointed out). So rather than change all of them, this patch changes just walk_page_range() to ignore VM_PFNMAP areas. The logic of hugetlb_vma() is moved back into walk_page_range(), as we want to test any vma in the range. VM_PFNMAP areas are used by: - graphics memory manager gpu/drm/drm_gem.c - global reference unit sgi-gru/grufile.c - sgi special memory char/mspec.c - and probably several out-of-tree modules [akpm@linux-foundation.org: remove now-unused hugetlb_vma() stub] Signed-off-by: Cliff Wickman <cpw@sgi.com> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: David Sterba <dsterba@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Joseph Qi authored
commit b4ca2b4b upstream. Last time we found a lock/unlock bug in ocfs2_file_aio_write, and then we did a thorough search for all lock resources in ocfs2_inode_info, including the rw, inode and open lockres, and found this bug. My kernel version is 3.0.13, and it is also in the latest version 3.9. In ocfs2_fiemap, once ocfs2_get_clusters_nocache fails, it should go to out_unlock instead of out, because we need to release the buffer head, up_read the alloc sem and unlock the inode. Signed-off-by: Joseph Qi <joseph.qi@huawei.com> Reviewed-by: Jie Liu <jeff.liu@oracle.com> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Acked-by: Sunil Mushran <sunil.mushran@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
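A self-contained sketch of the unwind rule, with hypothetical helpers standing in for the lock and buffer management: once resources are held, a failure must jump to the label that releases them, not past it.

```c
#include <errno.h>

/* Hypothetical helpers standing in for the lock/buffer management. */
static int take_locks(void)	{ return 0; }
static int map_extents(void)	{ return -EIO; }	/* pretend this fails */
static void release_locks(void)	{ }

static int fiemap_like_op(void)
{
	int ret;

	ret = take_locks();
	if (ret)
		goto out;		/* nothing held yet */

	ret = map_extents();
	if (ret)
		goto out_unlock;	/* not "out": locks are still held */

out_unlock:
	release_locks();
out:
	return ret;
}
```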
-
Jiri Kosina authored
commit 10b3a32d upstream. Commit 902c098a ("random: use lockless techniques in the interrupt path") turned the IRQ path from being spinlock protected into a lockless cmpxchg-retry update. That commit removed r->lock serialization between crediting entropy bits from IRQ context and accounting when extracting entropy on the userspace read path, but didn't turn the r->entropy_count reads/updates in account() to use cmpxchg as well. It has been observed that under certain circumstances this leads to read() on /dev/urandom returning 0 (EOF), as r->entropy_count gets corrupted and becomes negative, which in turn results in propagating 0 all the way from account() to the actual read() call. Convert the accounting code to be the proper lockless counterpart of what has been partially done by 902c098a. Signed-off-by: Jiri Kosina <jkosina@suse.cz> Cc: Theodore Ts'o <tytso@mit.edu> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> [bwh: Backported to 3.2: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
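The lockless counterpart the commit describes follows the usual compare-and-swap retry shape; the fragment below is a hedged, simplified kernel-style sketch and does not reproduce the actual account() logic.

```c
/*
 * Take a snapshot, compute the new count from it, and only commit if
 * entropy_count is still the value we read; otherwise another CPU raced
 * with us, so retry with a fresh snapshot.
 */
static int account_sketch(struct entropy_store *r, size_t nbytes)
{
	int orig, entropy_count;

retry:
	orig = ACCESS_ONCE(r->entropy_count);
	entropy_count = orig - nbytes * 8;
	if (entropy_count < 0)
		entropy_count = 0;	/* never let the count go negative */
	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
		goto retry;
	return orig - entropy_count;	/* bits actually debited */
}
```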
-
Ryusuke Konishi authored
commit 136e8770 upstream. nilfs2: fix issue of nilfs_set_page_dirty for page at EOF boundary DESCRIPTION: There are use-cases when NILFS2 file system (formatted with block size lesser than 4 KB) can be remounted in RO mode because of encountering of "broken bmap" issue. The issue was reported by Anthony Doggett <Anthony2486@interfaces.org.uk>: "The machine I've been trialling nilfs on is running Debian Testing, Linux version 3.2.0-4-686-pae (debian-kernel@lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.35-2), but I've also reproduced it (identically) with Debian Unstable amd64 and Debian Experimental (using the 3.8-trunk kernel). The problematic partitions were formatted with "mkfs.nilfs2 -b 1024 -B 8192"." SYMPTOMS: (1) System log contains error messages likewise: [63102.496756] nilfs_direct_assign: invalid pointer: 0 [63102.496786] NILFS error (device dm-17): nilfs_bmap_assign: broken bmap (inode number=28) [63102.496798] [63102.524403] Remounting filesystem read-only (2) The NILFS2 file system is remounted in RO mode. REPRODUSING PATH: (1) Create volume group with name "unencrypted" by means of vgcreate utility. (2) Run script (prepared by Anthony Doggett <Anthony2486@interfaces.org.uk>): ----------------[BEGIN SCRIPT]-------------------- VG=unencrypted lvcreate --size 2G --name ntest $VG mkfs.nilfs2 -b 1024 -B 8192 /dev/mapper/$VG-ntest mkdir /var/tmp/n mkdir /var/tmp/n/ntest mount /dev/mapper/$VG-ntest /var/tmp/n/ntest mkdir /var/tmp/n/ntest/thedir cd /var/tmp/n/ntest/thedir sleep 2 date darcs init sleep 2 dmesg|tail -n 5 date darcs whatsnew || true date sleep 2 dmesg|tail -n 5 ----------------[END SCRIPT]-------------------- REPRODUCIBILITY: 100% INVESTIGATION: As it was discovered, the issue takes place during segment construction after executing such sequence of user-space operations: open("_darcs/index", O_RDWR|O_CREAT|O_NOCTTY, 0666) = 7 fstat(7, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0 ftruncate(7, 60) The error message "NILFS error (device dm-17): nilfs_bmap_assign: broken bmap (inode number=28)" takes place because of trying to get block number for third block of the file with logical offset #3072 bytes. As it is possible to see from above output, the file has 60 bytes of the whole size. So, it is enough one block (1 KB in size) allocation for the whole file. Trying to operate with several blocks instead of one takes place because of discovering several dirty buffers for this file in nilfs_segctor_scan_file() method. The root cause of this issue is in nilfs_set_page_dirty function which is called just before writing to an mmapped page. When nilfs_page_mkwrite function handles a page at EOF boundary, it fills hole blocks only inside EOF through __block_page_mkwrite(). The __block_page_mkwrite() function calls set_page_dirty() after filling hole blocks, thus nilfs_set_page_dirty function (= a_ops->set_page_dirty) is called. However, the current implementation of nilfs_set_page_dirty() wrongly marks all buffers dirty even for page at EOF boundary. As a result, buffers outside EOF are inconsistently marked dirty and queued for write even though they are not mapped with nilfs_get_block function. FIX: This modifies nilfs_set_page_dirty() not to mark hole blocks dirty. Thanks to Vyacheslav Dubeyko for his effort on analysis and proposals for this issue. 
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Reported-by: Anthony Doggett <Anthony2486@interfaces.org.uk> Reported-by: Vyacheslav Dubeyko <slava@dubeyko.com> Cc: Vyacheslav Dubeyko <slava@dubeyko.com> Tested-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Brian Behlendorf authored
commit dfd20b2b upstream. The index on the page must be set before it is inserted in the radix tree. Otherwise there is a small race which can occur during lookup where the page can be found with the incorrect index. This will trigger the BUG_ON() in brd_lookup_page(). Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Reported-by: Chris Wedgwood <cw@f00f.org> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Aneesh Kumar K.V authored
commit 7c342512 upstream. We should not use set_pmd_at to update a pmd_t with a pgtable_t pointer. set_pmd_at is used to set a pmd with huge pte entries, and architectures like ppc64 clear a few flags from the pte when saving a new entry. Without this change we observe bad pte errors like the one below on ppc64 with THP enabled. BUG: Bad page map in process ld mm=0xc000001ee39f4780 pte:7fc3f37848000001 pmd:c000001ec0000000 Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Hugh Dickins <hughd@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-
Leonid Yegoshin authored
commit c2cc499c upstream. Page 'new' during MIGRATION can't be flushed with flush_cache_page(). Using flush_cache_page(vma, addr, pfn) is justified only if the page is already placed in the process page table, and that is done right after flush_cache_page(). But without it the arch function has no knowledge of the process PTE and does nothing. Besides that, flush_cache_page() flushes an application cache page, but the kernel has a different page virtual address and dirtied it. Replace it with flush_dcache_page(new), which is the proper usage. The old page is flushed in try_to_unmap_one() before migration. This bug occurs on a Sead3 board with an M14Kc MIPS CPU without cache aliasing (but Harvard arch, with separate I and D caches) in a tight memory environment (128MB), every 1-3 days of SOAK testing. It fails in cc1 during kernel build (SIGILL, SIGBUS, SIGSEG) if CONFIG_COMPACTION is switched ON. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Cc: Leonid Yegoshin <yegoshin@mips.com> Acked-by: Rik van Riel <riel@redhat.com> Cc: Michal Hocko <mhocko@suse.cz> Acked-by: Mel Gorman <mgorman@suse.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Russell King <rmk@arm.linux.org.uk> Cc: David Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-