- 26 Aug, 2014 40 commits
-
Vitaliy Kulikov authored
commit d009f3de upstream. Adds linear EQ filtering for integrated speaker protection. Signed-off-by: Vitaliy Kulikov <vitaliy.kulikov@idt.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Benjamin Tissoires authored
commit 42c22dbf upstream. This fix (not very clean, though) should fix the long-standing USB3 issue that was spotted last year. The rationale was given by Hans de Goede:

----
I think the most likely cause for this is a firmware bug in the unifying receiver, likely a race condition. The most prominent difference between having a USB-2 device plugged into an EHCI (so USB-2 only) port versus an XHCI port will be inter-packet timing. Specifically, if you send packets (i.e. HID reports) one at a time, then with the EHCI controller there will be a significant pause between them, whereas with XHCI they will be very close together in time.

The reason for this is the difference in EHCI / XHCI controller OS <-> driver interfaces. For non-periodic endpoints (control, bulk) the EHCI uses a circular linked list of commands in DMA memory, which it follows to execute commands; if the list is empty, it will go into an idle state and re-check periodically. The XHCI uses a ring of commands per endpoint, and if the OS places anything new on the ring it will do an ioport write, waking up the XHCI and making it send the new packet immediately.

For periodic transfers (isoc, interrupt) the delay between packets when sending one at a time (rather than queuing them up) will be even larger, because they need to be inserted into the EHCI schedule 2 ms in the future, so the OS driver can be sure that the EHCI controller does not try to start executing the time slot in question before the insertion has completed.

So a possible fix may be to insert a delay between packets being sent to the receiver.
----

I tested this on a buggy Haswell USB 3.0 motherboard, and I always get the notification after adding the msleep. Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
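As an illustration of the workaround described above, a minimal sketch, assuming a hypothetical dj_send_report() helper and struct dj_receiver rather than the driver's real HID output path: a short msleep() between consecutive reports restores the inter-packet gap that EHCI provided implicitly.

    #include <linux/delay.h>

    /* Sketch only: dj_send_report() and struct dj_receiver are hypothetical
     * stand-ins for the driver's actual HID transport functions. */
    static int dj_send_paired_reports(struct dj_receiver *rcv, u8 *first,
                                      u8 *second, size_t len)
    {
            int ret = dj_send_report(rcv, first, len);

            if (ret)
                    return ret;

            /* give the unifying receiver time to process the first report
             * before the next one arrives back-to-back on an XHCI host */
            msleep(10);

            return dj_send_report(rcv, second, len);
    }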
-
Jiri Kosina authored
commit 8c947e20 upstream. The Acer Aspire needs to be added to the nomux blacklist, otherwise the touchpad misbehaves rather randomly. Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Laurent Dufour authored
commit 761ce533 upstream. Numerical values stored in the device tree are encoded in Big Endian and should be byte-swapped when running in Little Endian. The RPA hotplug module should convert those values as well. Note that in rpaphp_get_drc_props(), the comparison between indexes[i+1] and *index is done using the BE values (whatever the current endianness is). This doesn't matter since we are checking for equality here. This way only the returned value is byte-swapped. RPA also makes RTAS calls, which implies BE values are used. According to the patch done in RTAS (http://patchwork.ozlabs.org/patch/336865), no additional conversion is required in RPA. Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
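For illustration, a small sketch of the kind of conversion involved (variable names and the property lookup are generic, not the exact rpaphp code): cells read from the device tree are big-endian, so any value used numerically is swapped with be32_to_cpu(), while pure equality comparisons can be done on the raw BE values.

    #include <linux/of.h>

    /* each cell in the property is stored big-endian in the device tree */
    const __be32 *indexes = of_get_property(dn, "ibm,drc-indexes", NULL);
    u32 value;

    if (indexes)
            value = be32_to_cpu(indexes[1]);   /* swap before handing back */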
-
Ying Xue authored
commit 5c0a0fc8 upstream. tipc_msg_build() calls skb_copy_to_linear_data_offset() to copy data from user space to kernel space. However, the latter function in its turn calls memcpy() to perform the actual copying. This poses an obvious security and robustness risk, since memcpy() never makes any validity check on the pointer it is copying from. To correct this, we replace the offending function call with a call to memcpy_fromiovecend(), which uses copy_from_user() to perform the copying. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
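A rough sketch of the substitution (buffer layout and offsets are illustrative, not the actual tipc_msg_build() code): the user iovec is copied with a helper built on copy_from_user(), which can fail gracefully instead of faulting inside memcpy().

    /* before (conceptually): an unchecked memcpy() from user memory
     *   skb_copy_to_linear_data_offset(skb, offset, iov->iov_base, len);
     * after: a fault-checking copy from the iovec */
    if (memcpy_fromiovecend(skb->data + offset, msg_sect, 0, len))
            return -EFAULT;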
-
Nithin Sujir authored
commit 68273712 upstream. This patch adds support for 57764, 57765, 57787, 57782 and 57786 devices. Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Maurizio Lombardi authored
commit fdbcbcab upstream. In case of error, the bnx2fc_allocate_hash_table() didn't free all the memory it allocated. Signed-off-by: Maurizio Lombardi <mlombard@redhat.com> Acked-by: Eddie Wai <eddie.wai@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Yuval Mintz authored
commit bd8e012b upstream. Since commit 3fb43eb2 ("bnx2x: Change to D3hot only on removal") nvram is accessible whenever the driver is loaded; thus it is possible to test it during self-test even if the interface is down. Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com> Signed-off-by: Ariel Elior <ariele@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Dan Carpenter authored
commit e4514cbd upstream. The cpl_abort_req struct has several reserved members which need to be cleared to avoid disclosing kernel information. I have added a memset() so now it matches the cxgb4 version of this function. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
David Gibson authored
commit 4710b2ba upstream. netxen_process_lro() contains two bounds checks. One for the ring number against the number of rings, and one for the Rx buffer ID against the array of receive buffers. Both of these have off-by-one errors, using > instead of >=. The correct versions are used in netxen_process_rcv(), they're just wrong in netxen_process_lro(). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Russell King authored
commit 3e548079 upstream. The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
            *using_dac = true;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err)
                            goto release_regions;
            }
    }

This means we only try and set the coherent DMA mask if we failed to set a 32-bit DMA mask, and only if both fail do we fail the driver. Adjust this so that if either setting fails, we fail the driver - and thereby end up properly setting both the DMA mask and the coherent DMA mask in the fallback case. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
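A sketch of the corrected fallback pattern (structure only; the actual patch is against ixgb_probe()): both masks are attempted at 64 bits, the driver falls back to 32 bits, and the probe fails if either call in the fallback also fails.

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err)
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
            *using_dac = true;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (!err)
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err)
                    goto release_regions;   /* no usable DMA configuration */
    }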
-
Wei Yongjun authored
commit de524681 upstream. Add the missing iounmap() before return from igbvf_probe() in the error handling case. Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Tested-by: Sibai Li <Sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Dan Carpenter authored
commit 3de9e65f upstream. If new_mtu is very large then "new_mtu + ETH_HLEN + ETH_FCS_LEN" can wrap and the check on the next line can underflow. This is one of those bugs which can be triggered by the user if you have namespaces configured. Also, since this is something the user can trigger, we don't want to have a dev_err() message. This is a static checker fix and I'm not sure what the impact is. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Tested-by: Sibai Li <Sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
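A hedged sketch of an overflow-safe version of the check (MAX_JUMBO_FRAME_SIZE and the lower bound of 68 are shown for illustration and may not match the driver's exact constants): validating new_mtu against fixed bounds before the addition means the sum can never wrap.

    static int example_change_mtu(struct net_device *netdev, int new_mtu)
    {
            int max_frame;

            /* reject out-of-range values before computing the frame size,
             * so "new_mtu + ETH_HLEN + ETH_FCS_LEN" cannot overflow */
            if (new_mtu < 68 ||
                new_mtu > MAX_JUMBO_FRAME_SIZE - ETH_HLEN - ETH_FCS_LEN)
                    return -EINVAL;

            max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
            /* ... program max_frame into the hardware ... */
            return 0;
    }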
-
Russell King authored
commit c21b8ebc upstream. The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
            if (!err)
                    pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev, "No usable DMA "
                                    "configuration, aborting\n");
                            goto err_dma;
                    }
            }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Akeem G Abodunrin authored
commit 42ce4126 upstream. This patch fixes Wake on LAN being reported as supported on some Ethernet ports, contrary to the hardware capability. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Fujinaka, Todd authored
commit a71fc313 upstream. Don't let ethtool try to write to iNVM in i210/i211. This fixes an issue seen by Marek Vasut. Reported-by: Marek Vasut <marex@denx.de> Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Stefan Assmann authored
commit 781798a1 upstream. Commit fa44f2f1 broke reloading of igb, when VFs are assigned to a guest, in several ways.

1. On module load adapter->vf_data does not get properly allocated, resulting in a NULL pointer dereference when accessing adapter->vf_data in igb_reset() on module reload.

    modprobe -r igb ; modprobe igb max_vfs=7

    [ 215.215837] igb 0000:01:00.1: removed PHC on eth1
    [ 216.932072] igb 0000:01:00.1: IOV Disabled
    [ 216.937038] igb 0000:01:00.0: removed PHC on eth0
    [ 217.127032] igb 0000:01:00.0: Cannot deallocate SR-IOV virtual functions while they are assigned - VFs will not be deallocated
    [ 217.146178] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.5-k
    [ 217.154050] igb: Copyright (c) 2007-2013 Intel Corporation.
    [ 217.160688] igb 0000:01:00.0: Enabling SR-IOV VFs using the module parameter is deprecated - please use the pci sysfs interface.
    [ 217.173703] igb 0000:01:00.0: irq 103 for MSI/MSI-X
    [ 217.179227] igb 0000:01:00.0: irq 104 for MSI/MSI-X
    [ 217.184735] igb 0000:01:00.0: irq 105 for MSI/MSI-X
    [ 217.220082] BUG: unable to handle kernel NULL pointer dereference at 0000000000000048
    [ 217.228846] IP: [<ffffffffa007c5e5>] igb_reset+0xc5/0x4b0 [igb]
    [ 217.235472] PGD 3607ec067 PUD 36170b067 PMD 0
    [ 217.240461] Oops: 0002 [#1] SMP
    [ 217.244085] Modules linked in: igb(+) igbvf mptsas mptscsih mptbase scsi_transport_sas [last unloaded: igb]
    [ 217.255040] CPU: 4 PID: 4833 Comm: modprobe Not tainted 3.11.0+ #46
    [...]
    [ 217.390007] [<ffffffffa007fab2>] igb_probe+0x892/0xfd0 [igb]
    [ 217.396422] [<ffffffff81470b3e>] local_pci_probe+0x1e/0x40
    [ 217.402641] [<ffffffff81472029>] pci_device_probe+0xf9/0x110
    [...]

2. A follow-up issue: pci_enable_sriov() should only be called if no VFs were still allocated on module unload. Otherwise pci_enable_sriov() gets called multiple times in a row, rendering the NIC unusable until reset.

3. Simply calling igb_enable_sriov() in igb_probe_vfs() is not enough, as the interrupts need to be re-set up. Switching that to igb_pci_enable_sriov().

Signed-off-by: Stefan Assmann <sassmann@kpanic.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Tested-by: Sibai Li <Sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Carolyn Wyborny authored
commit d1c17d80 upstream. This patch calls code to set the master/slave mode for all m88 gen 2 PHYs. This patch also removes the call to this function for I210 devices only from the function that is not called by I210 devices. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@gmail.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Fujinaka, Todd authored
commit a4e979a2 upstream. Add the ethtool offline tests for i354 devices. Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Russell King authored
commit dc4ff9bb upstream. The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
            if (!err)
                    pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev,
                                    "No usable DMA configuration, aborting\n");
                            goto err_dma;
                    }
            }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Don Skidmore authored
commit c7bb417d upstream. Since we are already checking for read failure in check_link, we don't need to do it here. Instead just make sure the watchdog task gets scheduled, if we are up, and it can be done there. This will better follow the igbvf method of handling a mailbox event and message timeout. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Stephen Ko <stephen.s.ko@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Russell King authored
commit 53567aa4 upstream. The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
            pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev, "No usable DMA "
                                    "configuration, aborting\n");
                            goto err_dma;
                    }
            }
            pci_using_dac = 0;
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Emil Tantilov authored
commit cf78959c upstream. This patch resolves an issue where the MTA table can be cleared when the interface is reset while in promisc mode. As a result, IPv6 traffic between VFs will be interrupted. This patch makes the update of the MTA table unconditional to avoid the inconsistent clearing on reset. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jacob Keller authored
commit 27d9ce4f upstream. ixgbe_napi_disable_all calls napi_disable on each queue, however the busy polling code introduced a local_bh_disable()d context around the napi_disable. The original author did not realize that napi_disable might sleep, which would cause a sleep-while-atomic BUG. In addition, on a single processor system, the ixgbe_qv_lock_napi loop shouldn't have to mdelay. This patch adds an ixgbe_qv_disable along with a new IXGBE_QV_STATE_DISABLED bit, which it uses to indicate to the poll and napi routines that the q_vector has been disabled. Now the ixgbe_napi_disable_all function will wait until all pending work has been finished and prevent any future work from being started. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com> Cc: Alexander Duyck <alexander.duyck@intel.com> Cc: Hyong-Youb Kim <hykim@myri.com> Cc: Amir Vadai <amirv@mellanox.com> Cc: Dmitry Kravkov <dmitry@broadcom.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Emil Tantilov authored
commit 2e010381 upstream. This patch resolves an issue where the logic used to detect changes in rx-usecs was incorrect and was masked by the call to ixgbe_update_rsc(). Switching rx-usecs between the {0, 2-9} and {1, 10 and up} ranges requires a reset to allow ixgbe_configure_tx_ring() to set the correct value for TXDCTL.WTHRESH, in order to avoid Tx hangs with BQL enabled. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Russell King authored
commit f5f2eda8 upstream. The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
            pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev,
                                    "No usable DMA configuration, aborting\n");
                            goto err_dma;
                    }
            }
            pci_using_dac = 0;
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Vladimir Davydov authored
commit 74a1b1ea upstream. On e1000_down(), we should ensure every asynchronous work item is canceled before proceeding. Since the watchdog_task can schedule other works apart from itself, it should be stopped first, but currently it is stopped after the reset_task. This can result in the following race, leading to the reset_task running after the module unload:

    e1000_down_and_stop():                       e1000_watchdog():
    ----------------------                       -----------------
    cancel_work_sync(reset_task)
                                                 schedule_work(reset_task)
    cancel_delayed_work_sync(watchdog_task)

The patch moves cancel_delayed_work_sync(watchdog_task) to the beginning of e1000_down_and_stop(), thus ensuring the race is impossible. Cc: Tushar Dave <tushar.n.dave@intel.com> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
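A minimal sketch of the resulting ordering (field names follow the e1000 driver but are shown for illustration only): the watchdog, the one work item that can re-queue reset_task, is cancelled first, so the later cancel of reset_task is final.

    static void e1000_down_and_stop(struct e1000_adapter *adapter)
    {
            /* stop the producer first: the watchdog can schedule reset_task */
            cancel_delayed_work_sync(&adapter->watchdog_task);

            /* nothing can re-queue reset_task now, so this cancel sticks */
            cancel_work_sync(&adapter->reset_task);

            cancel_delayed_work_sync(&adapter->phy_info_task);
            cancel_delayed_work_sync(&adapter->fifo_stall_task);
    }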
-
yzhu1 authored
commit 6a7d64e3 upstream. This change is based on a similar change made to e1000e support in commit bb9e44d0 ("e1000e: prevent oops when adapter is being closed and reset simultaneously"). The same issue has also been observed on the older e1000 cards. Here, the RESET_COUNT value is increased to 50 because stress tests generate so many accesses to the e1000 NIC that 25 iterations are not enough. Experimentation has shown that 50 is sufficient. Signed-off-by: yzhu1 <yanjun.zhu@windriver.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Hong Zhiguo authored
commit 49a45a06 upstream. tx_ring and adapter->tx_ring are already of type "struct e1000_tx_ring *". Signed-off-by: Hong Zhiguo <zhiguohong@tencent.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Mika Westerberg authored
commit 38a529b5 upstream. Commit 7509963c (e1000e: Fix a compile flag mis-match for suspend/resume) moved the suspend and resume hooks to be available when CONFIG_PM is set. However, CONFIG_PM can be set even if CONFIG_PM_SLEEP is not set, causing the following warnings to be emitted:

    drivers/net/ethernet/intel/e1000e/netdev.c:6178:12: warning: ‘e1000_suspend’ defined but not used [-Wunused-function]
    drivers/net/ethernet/intel/e1000e/netdev.c:6185:12: warning: ‘e1000_resume’ defined but not used [-Wunused-function]

To fix this, make the hooks available only when CONFIG_PM_SLEEP is set, and remove the CONFIG_PM wrapping from the driver ops, because this is already handled by SET_SYSTEM_SLEEP_PM_OPS() and SET_RUNTIME_PM_OPS(). Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Cc: Dave Ertman <davidx.m.ertman@intel.com> Cc: Aaron Brown <aaron.f.brown@intel.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
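A short sketch of the arrangement (names abbreviated; the real code lives in e1000e's netdev.c): the sleep hooks are guarded by CONFIG_PM_SLEEP, and the SET_SYSTEM_SLEEP_PM_OPS() initializer already expands to nothing when that option is off, so no extra #ifdef is needed around the ops table.

    #include <linux/pm.h>

    #ifdef CONFIG_PM_SLEEP
    static int e1000_suspend(struct device *dev) { /* ... */ return 0; }
    static int e1000_resume(struct device *dev)  { /* ... */ return 0; }
    #endif

    static const struct dev_pm_ops e1000_pm_ops = {
            /* compiles away unless CONFIG_PM_SLEEP is set */
            SET_SYSTEM_SLEEP_PM_OPS(e1000_suspend, e1000_resume)
    };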
-
David Ertman authored
commit 7509963c upstream. This patch addresses a mis-match between the declaration and usage of the e1000_suspend and e1000_resume functions. Previously, these functions were declared in a CONFIG_PM_SLEEP wrapper, and then utilized within a CONFIG_PM wrapper. Both the declaration and usage will now be contained within CONFIG_PM wrappers. Signed-off-by: Dave Ertman <davidx.m.ertman@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Russell King authored
commit 718a39eb upstream. The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
            if (!err)
                    pci_using_dac = 1;
    } else {
            err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
                    if (err) {
                            dev_err(&pdev->dev,
                                    "No usable DMA configuration, aborting\n");
                            goto err_dma;
                    }
            }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Russell King authored
commit 4aa806b7 upstream. Provide a helper to set both the DMA and coherent DMA masks to the same value - this avoids duplicated code in a number of drivers, sometimes with buggy error handling, and also allows us to identify which drivers do things differently. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
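The helper collapses the two calls into one; probe-time usage then typically looks roughly like this (the error label is a placeholder):

    err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
    if (err) {
            err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    dev_err(&pdev->dev, "No usable DMA configuration\n");
                    goto err_dma;   /* placeholder label */
            }
    }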
-
Keith Packard authored
commit 5f4dc28b upstream. When FB_EVENT_FB_UNBIND is sent, fbcon has two paths, one path taken when there is another frame buffer to switch any affected vcs to and another path when there isn't.

In the case where there is another frame buffer to use, fbcon_fb_unbind calls set_con2fb_map to remap all of the affected vcs to the replacement frame buffer. set_con2fb_map will eventually call con2fb_release_oldinfo when the last vcs gets unmapped from the old frame buffer. con2fb_release_oldinfo frees the fbcon data that is hooked off of the fb_info structure, including the cursor timer.

In the case where there isn't another frame buffer to use, fbcon_fb_unbind simply calls fbcon_unbind, which doesn't clear the con2fb_map or free the fbcon data hooked from the fb_info structure. In particular, it doesn't stop the cursor blink timer. When the fb_info structure is then freed, we end up with a timer queue pointing into freed memory and "bad things" start happening.

This patch first changes con2fb_release_oldinfo so that it can take a NULL pointer for the new frame buffer, but still does all of the deallocation and cursor timer cleanup. Finally, the patch tries to replicate some of what set_con2fb_map does by clearing the con2fb_map for the affected vcs and calling the modified con2fb_release_oldinfo function to clean up the fb_info structure. Signed-off-by: Keith Packard <keithp@keithp.com> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Cedric Le Goater authored
commit 212c0cbd upstream. The "screen" properties (depth, width, height, linebytes) need to be converted to the host endian order when read from the device tree. The offb_init_palette_hacks() routine also made assumptions about the host endian order. Signed-off-by: Cédric Le Goater <clg@fr.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jan Kara authored
commit 77ea2a4b upstream. free_holes_block() passed a local variable as a block pointer to ext4_clear_blocks(). Thus ext4_clear_blocks() zeroed out this local variable instead of the proper place in the inode / indirect block. We later zero out the proper place in the inode / indirect block, but we don't dirty the inode / buffer again, which can lead to subtle issues (some changes, e.g. to the inode, can be lost). Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Eric W. Biederman authored
commit 9566d674 upstream. While investigating the issue where "mount --bind -oremount,ro ..." would result in a later "mount --bind -oremount,rw" succeeding even if the mount started off locked, I realized that there are several additional mount flags that should be locked and are not. In particular MNT_NOSUID, MNT_NODEV, MNT_NOEXEC, and the atime flags, in addition to MNT_READONLY, should all be locked. These flags are all per superblock, can all be changed with MS_BIND, and should not be changeable if set by a more privileged user. The following additions to the current logic are added in this patch.

- nosuid may not be clearable by a less privileged user.
- nodev may not be clearable by a less privileged user.
- noexec may not be clearable by a less privileged user.
- atime flags may not be changeable by a less privileged user.

The logic with atime is that always setting atime on access is a global policy: backup software and auditing software could break if atime bits are not updated (when they are configured to be updated), and serious performance degradation could result (DOS attack) if atime updates happen when they have been explicitly disabled. Therefore an unprivileged user should not be able to mess with the atime bits set by a more privileged user. The additional restrictions are implemented with the addition of the MNT_LOCK_NOSUID, MNT_LOCK_NODEV, MNT_LOCK_NOEXEC, and MNT_LOCK_ATIME mnt flags. Taken together, these changes and the fixes for MNT_LOCK_READONLY should make it safe for an unprivileged user to create a user namespace and to call "mount --bind -o remount,... ..." without the danger of mount flags being changed maliciously. Cc: stable@vger.kernel.org Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
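For illustration, a sketch of the locking idea (the MNT_LOCK_* flag names are the ones the patch adds, but the surrounding code is simplified, not the actual namespace.c logic): when a mount is handed to a less privileged mount namespace, whatever restrictive per-sb flags it carries are recorded as locked so a later remount cannot clear them.

    /* simplified: done when the mount is copied into a namespace whose
     * owner lacks CAP_SYS_ADMIN over the original mount */
    if (!ns_capable(user_ns, CAP_SYS_ADMIN)) {
            if (mnt->mnt.mnt_flags & MNT_NOSUID)
                    mnt->mnt.mnt_flags |= MNT_LOCK_NOSUID;
            if (mnt->mnt.mnt_flags & MNT_NODEV)
                    mnt->mnt.mnt_flags |= MNT_LOCK_NODEV;
            if (mnt->mnt.mnt_flags & MNT_NOEXEC)
                    mnt->mnt.mnt_flags |= MNT_LOCK_NOEXEC;
            /* the atime policy is always locked for the unprivileged copy */
            mnt->mnt.mnt_flags |= MNT_LOCK_ATIME;
    }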
-
Eric W. Biederman authored
commit 07b64558 upstream. There are no races, as locked mount flags are guaranteed to never change. Moving the test into do_remount makes it more visible, and ensures all filesystem remounts pass the MNT_LOCK_READONLY permission check. This second case is not an issue today, as filesystem remounts are guarded by capable(CAP_SYS_ADMIN) and thus will always fail in less privileged mount namespaces, but it could become an issue in the future. Cc: stable@vger.kernel.org Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Eric W. Biederman authored
commit a6138db8 upstream. Kenton Varda <kenton@sandstorm.io> discovered that by remounting a read-only bind mount read-only in a user namespace, the MNT_LOCK_READONLY bit would be cleared, allowing an unprivileged user to then remount a read-only mount read-write. Correct this by replacing the mask of mount flags to preserve with a mask of mount flags that may be changed, and preserve all others. This ensures that any future bugs with this mask and remount will fail in an easy-to-detect way where new mount flags simply won't change. Cc: stable@vger.kernel.org Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Naoya Horiguchi authored
commit 0253d634 upstream. Commit 4a705fef ("hugetlb: fix copy_hugetlb_page_range() to handle migration/hwpoisoned entry") changed the order of huge_ptep_set_wrprotect() and huge_ptep_get(), which leads to breakage in some workloads like hugepage-backed heap allocation via libhugetlbfs. This patch fixes it. The test program for the problem is shown below:

    $ cat heap.c
    #include <unistd.h>
    #include <stdlib.h>
    #include <string.h>

    #define HPS 0x200000

    int main() {
            int i;
            char *p = malloc(HPS);
            memset(p, '1', HPS);
            for (i = 0; i < 5; i++) {
                    if (!fork()) {
                            memset(p, '2', HPS);
                            p = malloc(HPS);
                            memset(p, '3', HPS);
                            free(p);
                            return 0;
                    }
            }
            sleep(1);
            free(p);
            return 0;
    }

    $ export HUGETLB_MORECORE=yes ; export HUGETLB_NO_PREFAULT= ; hugectl --heap ./heap

Fixes 4a705fef ("hugetlb: fix copy_hugetlb_page_range() to handle migration/hwpoisoned entry"), so is applicable to -stable kernels which include it. Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Reported-by: Guillaume Morin <guillaume@morinfr.org> Suggested-by: Guillaume Morin <guillaume@morinfr.org> Acked-by: Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> [2.6.37+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-