- 17 Jan, 2013 (40 commits)
-
Yan, Zheng authored
(cherry picked from commit 0685235f) Add the dirty inode to the cap_dirty_migrating list instead; this avoids ceph_flush_dirty_caps() entering an infinite loop. Signed-off-by:
Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yan, Zheng authored
(cherry picked from commit ed75ec2c) __wake_requests() will enter an infinite loop if we use it to wake requests in the session->s_waiting list: __wake_requests() deletes requests from the list and __do_request() adds requests back to the list. Signed-off-by:
Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
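A common way to break this kind of loop is to detach the list before walking it, so anything __do_request() re-adds lands on the original list rather than the one being iterated. A minimal sketch of that idea (helper name simplified; treat the exact shape as illustrative rather than the literal patch):

    #include <linux/list.h>

    /* Wake everything currently queued on `head` exactly once. Splicing onto
     * a private list first means anything __do_request() adds back to `head`
     * waits for a later pass instead of extending this walk forever. */
    static void wake_requests_once(struct ceph_mds_client *mdsc,
                                   struct list_head *head)
    {
            struct ceph_mds_request *req, *next;
            LIST_HEAD(tmp);

            list_splice_init(head, &tmp);            /* head is now empty */
            list_for_each_entry_safe(req, next, &tmp, r_wait) {
                    list_del_init(&req->r_wait);
                    __do_request(mdsc, req);         /* may re-add to head */
            }
    }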
-
Yan, Zheng authored
(cherry picked from commit 5e62ad30) The cap from non-auth mds doesn't have a meaningful max_size value. Signed-off-by:
Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit 61c74035) In __unregister_linger_request(), the request is being removed from the osd client's req_linger list only when the request has a non-null osd pointer. It should be done whether or not the request currently has an osd. This is most likely a non-issue because I believe the request will always have an osd when this function is called. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit 2fd82b9e) RBD_MAX_SEG_NAME_LEN represents the maximum length of an rbd object name (i.e., one of the objects providing storage backing an rbd image). Another symbol, MAX_OBJ_NAME_SIZE, is used in the osd client code to define the maximum length of any object name in an osd request. Right now they disagree, with RBD_MAX_SEG_NAME_LEN being too big. There's no real benefit at this point to defining the rbd object name length limit separate from any other object name, so just get rid of RBD_MAX_SEG_NAME_LEN and use MAX_OBJ_NAME_SIZE in its place. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit 42382b70) There is no check in rbd_remove() to see if anybody holds open the image being removed. That's not cool. Add a simple open count that goes up and down with opens and closes (releases) of the device, and don't allow an rbd image to be removed if the count is non-zero. Protect the updates of the open count value with ctl_mutex to ensure the underlying rbd device doesn't get removed while concurrently being opened. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
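A minimal sketch of such an open-count guard, assuming an open_count field on struct rbd_device and the existing ctl_mutex (names and signatures are illustrative, not the exact patch):

    static int rbd_open(struct block_device *bdev, fmode_t mode)
    {
            struct rbd_device *rbd_dev = bdev->bd_disk->private_data;

            mutex_lock(&ctl_mutex);
            rbd_dev->open_count++;
            mutex_unlock(&ctl_mutex);

            return 0;
    }

    static int rbd_release(struct gendisk *disk, fmode_t mode)
    {
            struct rbd_device *rbd_dev = disk->private_data;

            mutex_lock(&ctl_mutex);
            rbd_dev->open_count--;
            mutex_unlock(&ctl_mutex);

            return 0;
    }

    /* rbd_remove() then refuses the removal while open_count != 0, taking
     * ctl_mutex around the check so it cannot race with a new open. */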
-
Alex Elder authored
(cherry picked from commit 41f38c2b) If rbd_dev_snaps_update() has ever been called for an rbd device structure there could be snapshot structures on its snaps list. In rbd_add(), this function is called but a subsequent error path neglected to clean up any of these snapshots. Add a call to rbd_remove_all_snaps() in the appropriate spot to remedy this. Change a couple of error labels to be a little clearer while there. Drop the leading underscores from the function name; there's nothing special about that function that they might signify. As suggested in review, the leading underscores in __rbd_remove_snap_dev() have been removed as well. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Josh Durgin <josh.durgin@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit d4b125e9) Change RBD_MAX_SNAP_NAME_LEN to be based on NAME_MAX. That is a practical limit for the length of a snapshot name (based on the presence of a directory using the name under /sys/bus/rbd to represent the snapshot). The /sys entry is created by prefixing it with "snap_"; define that prefix symbolically, and take its length into account in defining the snapshot name length limit. Enforce the limit in rbd_add_parse_args(). Also delete a dout() call in that function that was not meant to be committed. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Dan Mick <dan.mick@inktank.com> Reviewed-by:
Josh Durgin <josh.durgin@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
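The described limit can be expressed directly in terms of NAME_MAX and the sysfs prefix; a sketch of what such definitions might look like (assumed names, not necessarily the exact ones in the patch):

    #include <linux/limits.h>                       /* NAME_MAX */

    #define RBD_SNAP_DEV_NAME_PREFIX        "snap_"
    /* sizeof() counts the trailing NUL, so add 1 back after subtracting. */
    #define RBD_MAX_SNAP_NAME_LEN \
            (NAME_MAX - sizeof(RBD_SNAP_DEV_NAME_PREFIX) + 1)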
-
Alex Elder authored
(cherry picked from commit be466c1c) The name of the "read-only" mapping option was inadvertently changed in this commit: f84344f3 rbd: separate mapping info in rbd_dev Revert that hunk to return it to what it should be. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Dan Mick <dan.mick@inktank.com> Reviewed-by:
Josh Durgin <josh.durgin@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit a0ea3a40) When rbd_dev_probe() calls rbd_dev_image_id() it expects to get a 0 return code if successful, but it is getting a positive value. The reason is that rbd_dev_image_id() returns the value it gets from rbd_req_sync_exec(), which returns the number of bytes read in as a result of the request. (This ultimately comes from ceph_copy_from_page_vector() in rbd_req_sync_op()). Force the return value to 0 when successful in rbd_dev_image_id(). Do the same in rbd_dev_v2_object_prefix(). Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Josh Durgin <josh.durgin@inktank.com> Reviewed-by:
Dan Mick <dan.mick@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
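The fix boils down to collapsing a positive "bytes transferred" result into 0 before handing it to callers that expect 0 or a negative errno; a minimal illustration (not the actual rbd code):

    /* rbd_req_sync_exec() reports how many bytes it copied; callers of
     * rbd_dev_image_id() only understand 0 on success or a negative errno. */
    static int normalize_sync_ret(int ret)
    {
            return ret < 0 ? ret : 0;
    }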
-
Alex Elder authored
(cherry picked from commit b213e0b1) In rbd_dev_id_put(), there's a loop that's intended to determine the maximum device id in use. But it isn't doing that at all; the effect of how it's written is to simply use the just-put id number, which ignores the whole purpose of this function. Fix the bug. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Josh Durgin <josh.durgin@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
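Determining the maximum in-use id means walking the device list rather than reusing the id that was just released; a hedged sketch of that loop (the list, lock, field, and counter names are assumptions):

    /* Recompute the highest id still in use after putting one back. */
    static void rbd_dev_id_recompute_max(void)
    {
            struct rbd_device *rbd_dev;
            int max_id = 0;

            spin_lock(&rbd_dev_list_lock);
            list_for_each_entry(rbd_dev, &rbd_dev_list, node)
                    if (rbd_dev->dev_id > max_id)
                            max_id = rbd_dev->dev_id;
            spin_unlock(&rbd_dev_list_lock);

            atomic64_set(&rbd_dev_id_max, max_id);
    }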
-
Alex Elder authored
(cherry picked from commit 7d5f2481) In __unregister_request(), there is a call to list_del_init() referencing a request that was the subject of a call to ceph_osdc_put_request() on the previous line. This is not safe, because the request structure could have been freed by the time we reach the list_del_init(). Fix this by reversing the order of these lines. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
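The fix is purely an ordering change: finish touching the request's list linkage while a reference is still held, and only then drop the reference that may free it. Roughly (the list field name is an assumption):

    /* Before (unsafe): req may already be freed when list_del_init() runs. */
    ceph_osdc_put_request(req);
    list_del_init(&req->r_req_lru_item);

    /* After (safe): finish using req, then drop the (possibly last) reference. */
    list_del_init(&req->r_req_lru_item);
    ceph_osdc_put_request(req);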
-
Sage Weil authored
(cherry picked from commit 83aff95e) This would reset a connection with any OSD that had an outstanding request that was taking more than N seconds. The idea was that if the OSD was buggy, the client could compensate by resending the request. In reality, this only served to hide server bugs, and we haven't actually seen such a bug in quite a while. Moreover, the userspace client code never did this. More importantly, often the request is taking a long time because the OSD is trying to recover, or overloaded, and killing the connection and retrying would only make the situation worse by giving the OSD more work to do. Signed-off-by:
Sage Weil <sage@inktank.com> Reviewed-by:
Alex Elder <elder@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit 685a7555) If an osd has no requests and no linger requests, __reset_osd() will just remove it with a call to __remove_osd(). That drops a reference to the osd, and therefore the osd may have been freed by the time __reset_osd() returns. That function offers no indication this may have occurred, and as a result the osd will continue to be used even when it's no longer valid. Change __reset_osd() so it returns an error (ENODEV) when it deletes the osd being reset. And change __kick_osd_requests() so it returns immediately (before referencing osd again) if __reset_osd() returns *any* error. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
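A hedged sketch of the resulting caller-side contract (function bodies simplified to the relevant check):

    static void __kick_osd_requests(struct ceph_osd_client *osdc,
                                    struct ceph_osd *osd)
    {
            int err = __reset_osd(osdc, osd);

            if (err)
                    return;         /* osd may have been removed and freed */

            /* only now is it safe to keep dereferencing osd */
    }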
-
Sage Weil authored
(cherry picked from commit 0ed7285e) Ensure that we set the err value correctly so that we do not pass a 0 value to ERR_PTR and confuse the calling code. (In particular, osd_client.c handle_map() will BUG(!newmap)). Signed-off-by:
Sage Weil <sage@inktank.com> Reviewed-by:
Alex Elder <elder@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
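The pitfall is that ERR_PTR(0) is just NULL, so a caller expecting either a valid map or an error pointer (and asserting BUG(!newmap)) trips over a success-shaped failure. A minimal illustration of the guard (the chosen errno is an assumption):

    #include <linux/err.h>

    struct ceph_osdmap;     /* opaque here; only the return path matters */

    static struct ceph_osdmap *map_error(int err)
    {
            if (err == 0)
                    err = -EINVAL;  /* never hand ERR_PTR(0) == NULL to callers */
            return ERR_PTR(err);
    }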
-
Sage Weil authored
(cherry picked from commit 0fa6ebc6) We should not set con->state to CLOSED here; that happens in ceph_fault() in the caller, where it first asserts that the state is not yet CLOSED. Avoids a BUG when the features don't match. Since the fail_protocol() has become a trivial wrapper, replace calls to it with direct calls to reset_connection(). Signed-off-by:
Sage Weil <sage@inktank.com> Reviewed-by:
Alex Elder <elder@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Vetter authored
commit 48e85834 upstream. This reverts commit 9756fe38. The bogus lvds output is actually a lvds->hdmi bridge, which we don't really support. But unconditionally disabling it breaks some existing setups. Reported-by:
John Tapsell <johnflux@gmail.com> References: http://permalink.gmane.org/gmane.comp.freedesktop.xorg.drivers.intel/17237 Signed-off-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Luis Henriques <luis.henriques@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit 122070a2) A number of assertions in the ceph messenger are implemented with BUG_ON(), killing the system if a connection's state doesn't match what's expected. At this point our state model is (evidently) not well understood enough for these assertions to trigger a BUG(). Convert all BUG_ON(con->state...) calls to be WARN_ON(con->state...) so we learn about these issues without killing the machine. We now recognize that a connection fault can occur due to a socket closure at any time, regardless of the state of the connection. So there is really nothing we can assert about the state of the connection at that point, so eliminate that assertion. Reported-by:
Ugis <ugis22@gmail.com> Tested-by:
Ugis <ugis22@gmail.com> Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
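The conversion is mechanical; for example (CON_STATE_CONNECTING stands in for whichever state a given assertion checks):

    /* Before: an unexpected connection state kills the machine. */
    BUG_ON(con->state != CON_STATE_CONNECTING);

    /* After: log the state and a backtrace, then carry on so it can be debugged. */
    WARN_ON(con->state != CON_STATE_CONNECTING);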
-
Alex Elder authored
(cherry picked from commit e6d50f67) When ceph_osdc_handle_map() is called to process a new osd map, kick_requests() is called to ensure all affected requests are updated if necessary to reflect changes in the osd map. This happens in two cases: whenever an incremental map update is processed; and when a full map update (or the last one if there is more than one) gets processed. In the former case, the kick_requests() call is followed immediately by a call to reset_changed_osds() to ensure any connections to osds affected by the map change are reset. But for full map updates this isn't done. Both cases should be doing this osd reset. Rather than duplicating the reset_changed_osds() call, move it into the end of kick_requests(). Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit ab60b16d) The kick_requests() function is called by ceph_osdc_handle_map() when an osd map change has been indicated. Its purpose is to re-queue any request whose target osd is different from what it was when it was originally sent. It is structured as two loops, one for incomplete but registered requests, and a second for handling completed linger requests. As a special case, in the first loop if a request marked to linger has not yet completed, it is moved from the request list to the linger list. This is a quick and dirty way to have the second loop handle sending the request along with all the other linger requests. Because of the way it's done now, however, this quick and dirty solution can result in these incomplete linger requests never getting re-sent as desired. The problem lies in the fact that the second loop only arranges for a linger request to be sent if it appears its target osd has changed. This is the proper handling for *completed* linger requests (it avoids issuing the same linger request twice to the same osd). But although the linger requests added to the list in the first loop may have been sent, they have not yet completed, so they need to be re-sent regardless of whether their target osd has changed. The first required fix is that we need to avoid calling __map_request() on any incomplete linger request. Otherwise the subsequent __map_request() call in the second loop will find the target osd has not changed and will therefore not re-send the request. Second, we need to be sure that a sent but incomplete linger request gets re-sent. If the target osd is the same with the new osd map as it was when the request was originally sent, this won't happen. This can be fixed through careful handling when we move these requests from the request list to the linger list, by unregistering the request *before* it is registered as a linger request. This works because a side-effect of unregistering the request is to make the request's r_osd pointer be NULL, and *that* will ensure the second loop actually re-sends the linger request. Processing of such a request is done at that point, so continue with the next one once it's been moved. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
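A hedged sketch of the first-loop handling for a sent-but-incomplete linger request, simplified from the description above (reference counting around the move is omitted, and the surrounding loop is implied):

    if (req->r_linger && list_empty(&req->r_linger_item)) {
            /*
             * Unregister first: that clears req->r_osd, so the second loop's
             * __map_request() treats the target as changed and re-sends it.
             */
            __unregister_request(osdc, req);
            __register_linger_request(osdc, req);
            continue;       /* this request is handled; move to the next one */
    }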
-
Alex Elder authored
(cherry picked from commit c89ce05e) In kick_requests(), we need to register the request before we unregister the linger request. Otherwise the unregister will reset the request's osd pointer to NULL. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit a978fa20) The red-black node in the ceph osd request structure is initialized in ceph_osdc_alloc_request() using rb_init_node(). We do need to initialize this, because in __unregister_request() we call RB_EMPTY_NODE(), which expects the node it's checking to have been initialized. But rb_init_node() is apparently overkill, and may in fact be on its way out. So use RB_CLEAR_NODE() instead. For a little more background, see this commit: 4c199a93 "rbtree: empty nodes have no color" Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
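The pairing the message relies on looks roughly like this (field names follow the osd request code but should be treated as illustrative):

    #include <linux/rbtree.h>

    /* At allocation: mark the node empty so RB_EMPTY_NODE() is meaningful. */
    RB_CLEAR_NODE(&req->r_node);

    /* At unregister: only erase nodes that were actually inserted. */
    if (!RB_EMPTY_NODE(&req->r_node))
            rb_erase(&req->r_node, &osdc->requests);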
-
Alex Elder authored
(cherry picked from commit 3ee5234d) The red-black node in the ceph osd event structure is not initialized in create_osdc_create_event(). Because this node can be the subject of a RB_EMPTY_NODE() call later on, we should ensure the node is initialized properly for that. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit f407731d) The red-black node in the ceph osd structure is not initialized in create_osd(). Because this node can be the subject of a RB_EMPTY_NODE() call later on, we should ensure the node is initialized properly for that. Add a call to RB_CLEAR_NODE() to initialize it. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit 28362986) When a connection's socket disconnects, or if there's a protocol error of some kind on the connection, a fault is signaled and the connection is reset (closed and reopened, basically). We currently get an error message on the log whenever this occurs. A ceph connection will attempt to reestablish a socket connection repeatedly if a fault occurs. This means that these error messages will get repeatedly added to the log, which is undesirable. Change the error message to be a warning, so they don't get logged by default. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Elder authored
(cherry picked from commit 7bb21d68) A connection's socket can close for any reason, independent of the state of the connection (and irrespective of the connection mutex). As a result, the connection can be in pretty much any state at the time its socket is closed. Handle those other cases at the top of con_work(). Pull this whole block of code into a separate function to reduce the clutter. Signed-off-by:
Alex Elder <elder@inktank.com> Reviewed-by:
Sage Weil <sage@inktank.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Graf authored
commit e43a0287 upstream. When remembering the direction of a DCR transaction, we should write to the same variable that we interpret later when doing vcpu_run again. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
CAI Qian <caiqian@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Vetter authored
commit 539526b4 upstream. We've originally added this in commit 291427f5 Author: Jesse Barnes <jbarnes@virtuousgeek.org> Date: Fri Jul 29 12:42:37 2011 -0700 drm/i915: apply phase pointer override on SNB+ too and then copy-pasted it over to ivb/ppt. The w/a was originally added for ilk/ibx in commit 5b2adf89 Author: Jesse Barnes <jbarnes@virtuousgeek.org> Date: Thu Oct 7 16:01:15 2010 -0700 drm/i915: add Ironlake clock gating workaround for FDI link training and fixed up a bit in commit 6f06ce18 Author: Jesse Barnes <jbarnes@virtuousgeek.org> Date: Tue Jan 4 15:09:38 2011 -0800 drm/i915: set phase sync pointer override enable before setting phase sync pointer It turns out that this w/a isn't actually required on cpt/ppt and positively harmful on ivb/ppt when using fdi B/C links - it results in a black screen occasionally, with seemingly everything working as it should. The only failure indication I've found in the hw is that eventually (but not right after the modeset completes) a pipe underrun is signalled. Big thanks to Arthur Runyan for all the ideas for registers to check and changes to test, otherwise I couldn't ever have tracked this down! Reviewed-by:
Jesse Barnes <jbarnes@virtuousgeek.org> Cc: "Runyan, Arthur J" <arthur.j.runyan@intel.com> Signed-off-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by:
CAI Qian <caiqian@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Konstantin Khlebnikov authored
commit 311bd842 upstream. This patch fixes use-after-free and double-free bugs in edac_mc_sysfs_exit(). mci_pdev has a single reference and put_device() calls mc_attr_release() which calls kfree(). The following device_del() works with already released memory. Another kfree() in edac_mc_sysfs_exit() releases the same memory again. Great. Signed-off-by:
Konstantin Khlebnikov <khlebnikov@openvz.org> Cc: Denis Kirjanov <kirjanov@gmail.com> Cc: Mauro Carvalho Chehab <mchehab@redhat.com> Link: http://lkml.kernel.org/r/20121214110310.11019.21098.stgit@zurg Signed-off-by:
Borislav Petkov <bp@alien8.de> [ a partial 3.7.y backport ] Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
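The underlying rule is the standard device-model teardown order: unregister first, drop the reference last, and let the release callback do the freeing. A hedged sketch of that order for this case:

    /* mci_pdev's ->release() (mc_attr_release) already does the kfree(). */
    device_del(mci_pdev);   /* remove from sysfs while the memory is still valid */
    put_device(mci_pdev);   /* drop the last reference; the release callback frees it */
    /* and no explicit kfree(mci_pdev) afterwards; the release callback owns the memory */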
-
Stanislaw Gruszka authored
commit ab9d6e4f upstream. This reverts: commit be03d4a4 Author: Andreas Hartmann <andihartmann@01019freenet.de> Date: Tue Apr 17 00:25:28 2012 +0200 rt2x00: Don't let mac80211 send a BAR when an AMPDU subframe fails To fix the problem worked around by the above commit, use the IEEE80211_HW_TEARDOWN_AGGR_ON_BAR_FAIL flag (see the change log for the "mac80211: introduce IEEE80211_HW_TEARDOWN_AGGR_ON_BAR_FAIL" patch). Resolve: https://bugzilla.kernel.org/show_bug.cgi?id=42828 Bisected-by:
Francisco Pina Martins <f.pinamartins@gmail.com> Signed-off-by:
Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Joe Thornber authored
commit e8088073 upstream. There is a race when discard bios and non-discard bios are issued simultaneously to the same block. Discard support is expensive for all thin devices precisely because you have to be careful to quiesce the area you're discarding. DM thin must handle this conflicting IO pattern (simultaneous non-discard vs discard) even though a sane application shouldn't be issuing such IO. The race manifests as follows: 1. A non-discard bio is mapped in thin_bio_map. This doesn't lock out parallel activity to the same block. 2. A discard bio is issued to the same block as the non-discard bio. 3. The discard bio is locked in a dm_bio_prison_cell in process_discard to lock out parallel activity against the same block. 4. The non-discard bio's mapping continues and its all_io_entry is incremented so the bio is accounted for in the thin pool's all_io_ds which is a dm_deferred_set used to track time locality of non-discard IO. 5. The non-discard bio is finally locked in a dm_bio_prison_cell in process_bio. The race can result in deadlock, leaving the block layer hanging waiting for completion of a discard bio that never completes, e.g.: INFO: task ruby:15354 blocked for more than 120 seconds. "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. ruby D ffffffff8160f0e0 0 15354 15314 0x00000000 ffff8802fb08bc58 0000000000000082 ffff8802fb08bfd8 0000000000012900 ffff8802fb08a010 0000000000012900 0000000000012900 0000000000012900 ffff8802fb08bfd8 0000000000012900 ffff8803324b9480 ffff88032c6f14c0 Call Trace: [<ffffffff814e5a19>] schedule+0x29/0x70 [<ffffffff814e3d85>] schedule_timeout+0x195/0x220 [<ffffffffa06b9bc1>] ? _dm_request+0x111/0x160 [dm_mod] [<ffffffff814e589e>] wait_for_common+0x11e/0x190 [<ffffffff8107a170>] ? try_to_wake_up+0x2b0/0x2b0 [<ffffffff814e59ed>] wait_for_completion+0x1d/0x20 [<ffffffff81233289>] blkdev_issue_discard+0x219/0x260 [<ffffffff81233e79>] blkdev_ioctl+0x6e9/0x7b0 [<ffffffff8119a65c>] block_ioctl+0x3c/0x40 [<ffffffff8117539c>] do_vfs_ioctl+0x8c/0x340 [<ffffffff8119a547>] ? block_llseek+0x67/0xb0 [<ffffffff811756f1>] sys_ioctl+0xa1/0xb0 [<ffffffff810561f6>] ? sys_rt_sigprocmask+0x86/0xd0 [<ffffffff814ef099>] system_call_fastpath+0x16/0x1b The thinp-test-suite's test_discard_random_sectors reliably hits this deadlock on fast SSD storage. The fix for this race is that the all_io_entry for a bio must be incremented whilst the dm_bio_prison_cell is held for the bio's associated virtual and physical blocks. That cell locking wasn't occurring early enough in thin_bio_map. This patch fixes this. Care is taken to always call the new function inc_all_io_entry() with the relevant cells locked, but they are generally unlocked before calling issue() to try to avoid holding the cells locked across generic_submit_request. Also, now that thin_bio_map may lock bios in a cell, process_bio() is no longer the only thread that will do so. Because of this we must be sure to use cell_defer_except() to release all non-holder entries, that were added by the other thread, because they must be deferred. This patch depends on "dm thin: replace dm_cell_release_singleton with cell_defer_except". Signed-off-by:
Joe Thornber <ejt@redhat.com> Signed-off-by:
Mike Snitzer <snitzer@redhat.com> Signed-off-by:
Alasdair G Kergon <agk@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ralf Baechle authored
commit 91209635 upstream. This reverts commit ff401e52. This breaks on MIPS64 R2 cores such as Broadcom's. Signed-off-by:
Jayachandran C <jchandra@broadcom.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit 81d0a6ae upstream. Use DIV_ROUND_UP to prevent truncation by integer division. This ensures we return enough delay time. Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
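A small user-space illustration of the truncation (the macro mirrors the kernel's DIV_ROUND_UP; the numbers are made up):

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))   /* as in the kernel */

    int main(void)
    {
            /* e.g. a 12500 uV step at a ramp rate of 10000 uV per ms */
            int step_uV = 12500, ramp_uV_per_ms = 10000;

            printf("truncated delay:  %d ms\n", step_uV / ramp_uV_per_ms);              /* 1 ms: too short */
            printf("rounded-up delay: %d ms\n", DIV_ROUND_UP(step_uV, ramp_uV_per_ms)); /* 2 ms: enough */
            return 0;
    }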
-
Axel Lin authored
commit adf6178a upstream. Integer division may truncate. This happens when the pdata->buckx_voltagex setting is not aligned with 1000 uV. Thus use uV in voltage_map_desc; this ensures the selected voltage won't be less than the pdata buckx_voltagex settings. Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit bc3b7756 upstream. Current code does integer division (min_vol = min_uV / 1000) before passing min_vol to max8997_get_voltage_proper_val(). So it is possible min_vol is truncated to a smaller value. For example, if the requested min_uV is 800900 for an ldo: min_vol = 800900 / 1000 = 800 (mV) Then max8997_get_voltage_proper_val returns 800 mV for this case, which is lower than the requested voltage. Use uV rather than mV in voltage_map_desc to prevent truncation by integer division. Signed-off-by:
Axel Lin <axel.lin@ingics.com> Signed-off-by:
Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
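The 800900 uV example works out like this (a user-space illustration of why the lookup should stay in uV):

    #include <stdio.h>

    int main(void)
    {
            int min_uV = 800900;                    /* requested minimum voltage */
            int min_mV = min_uV / 1000;             /* truncates to 800 mV */

            printf("mV-based request: %d mV (= %d uV, below the %d uV asked for)\n",
                   min_mV, min_mV * 1000, min_uV);
            printf("uV-based request: %d uV (no truncation, never undershoots)\n",
                   min_uV);
            return 0;
    }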
-
Jan Beulich authored
commit 75e1a2ae upstream. Debug port in-use determination must be done before the controller gets reset the first time, i.e. before the call to ehci_setup() as of commit 1a49e2ac. That commit effectively rendered commit 9fa5780b useless. While moving that code around, also fix the BAR determination - the respective capability field is a 3- rather than a 2-bit one -, and use PCI_CAP_ID_DBG instead of the literal 0x0a. It's unclear to me whether the debug port functionality is important enough to warrant fixing this in stable kernels too. Signed-off-by:
Jan Beulich <jbeulich@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sarah Sharp authored
commit 55c1945e upstream. A high speed control or bulk endpoint may have bInterval set to zero, which means it does not NAK. If bInterval is non-zero, it means the endpoint NAKs at a rate of 2^(bInterval - 1). The xHCI code to compute the NAK interval does not handle the special case of zero properly. The current code unconditionally subtracts one from bInterval and uses it as an exponent. This causes a very large bInterval to be used, and warning messages like these will be printed: usb 1-1: ep 0x1 - rounding interval to 32768 microframes, ep desc says 0 microframes This may cause the xHCI host hardware to reject the Configure Endpoint command, which means the HS device will be unusable under xHCI ports. This patch should be backported to kernels as old as 2.6.31, that contain commit dfa49c4a "USB: xhci - fix math in xhci_get_endpoint_interval()". Reported-by:
Vincent Pelletier <plr.vincent@gmail.com> Suggested-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
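The rule the patch enforces can be written as a tiny helper (a user-space sketch of the interval rule, not the actual xHCI function):

    #include <stdio.h>

    /* HS control/bulk endpoints: bInterval == 0 means the endpoint never NAKs;
     * otherwise it NAKs at most once every 2^(bInterval - 1) microframes. */
    static unsigned int hs_nak_interval(unsigned int bInterval)
    {
            if (bInterval == 0)
                    return 0;
            return 1U << (bInterval - 1);
    }

    int main(void)
    {
            printf("%u %u %u\n",
                   hs_nak_interval(0), hs_nak_interval(1), hs_nak_interval(4));
            return 0;
    }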
-
Oliver Neukum authored
commit 07e72b95 upstream. Some touchscreens have buggy firmware which claims remote wakeup to be enabled after a reset. They nevertheless crash if the feature is cleared by the host. Add a check for reset resume before checking for an enabled remote wakeup feature. On compliant devices the feature must be cleared after a reset anyway. Signed-off-by:
Oliver Neukum <oneukum@suse.de> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sarah Sharp authored
commit c52804a4 upstream. The USB core hub thread (khubd) is designed with external USB hubs in mind. It expects that if a port status change bit is set, the hub will continue to send a notification through the hub status data transfer. Basically, it expects hub notifications to be level-triggered. The xHCI host controller is designed to be edge-triggered on the logical 'OR' of all the port status change bits. When all port status change bits are clear, and a new change bit is set, the xHC will generate a Port Status Change Event. If another change bit is set in the same port status register before the first bit is cleared, it will not send another event. This means that the hub code may lose port status changes because of race conditions between clearing change bits. The user sees this as a "dead port" that doesn't react to device connects. The fix is to turn on port polling whenever a new change bit is set. Once the USB core issues a hub status request that shows that no change bits are set in any USB ports, turn off port polling. We can't allow the USB core to poll the roothub for port events during host suspend because if the PCI host is in D3cold, the port registers will be all f's. Instead, stop the port polling timer, and unconditionally restart it when the host resumes. If there are no port change bits set after the resume, the first call to hub_status_data will disable polling. This patch should be backported to stable kernels with the first xHCI support, 2.6.31 and newer, that include the commit 0f2a7930 "USB: xhci: Root hub support." There will be merge conflicts because the check for HC_STATE_SUSPENDED was moved into xhci_suspend in 3.8. Signed-off-by:
Sarah Sharp <sarah.a.sharp@linux.intel.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
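In hcd terms the mechanism described is the root-hub polling flag; a hedged sketch of the two halves (the exact call sites in the xHCI driver differ):

    #include <linux/usb/hcd.h>

    /* On a port status change event: make khubd poll the root hub instead of
     * waiting for another edge-triggered event that may never come. */
    set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
    usb_hcd_poll_rh_status(hcd);

    /* Once hub_status_data() sees no change bits pending on any port: */
    clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);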
-
Sarah Sharp authored
commit 65bdac5e upstream. An empty port can transition to either Inactive or Compliance Mode if a newly connected USB 3.0 device fails to link train. In that case, we issue a warm reset. Some devices, such as John's Roseweil eusb3 enclosure, slip back into Compliance Mode after the warm reset. The current warm reset code does not check for device connect status on warm reset completion, and it incorrectly reports the warm reset succeeded. This causes the USB core to attempt to send a Set Address control transfer to a port in Compliance Mode, which will always fail. Make hub_port_wait_reset check the current connect status and link state after the warm reset completes. Return a failure status if the device is disconnected or the link state is Compliance Mode or SS.Inactive. Make hub_events disable the port if warm reset fails. This will disable the port, and then bring it back into the RxDetect state. Make the USB core ignore the connect change until the device reconnects. Note that this patch does NOT handle connected devices slipping into the Inactive state very well. This is a concern, because devices can go into the Inactive state on U1/U2 exit failure. However, the fix for that case is too large for stable, so it will be submitted in a separate patch. This patch should be backported to kernels as old as 3.2, that contain the commit ID 75d7cf72 "usbcore: refine warm reset logic" Signed-off-by:
Sarah Sharp <sarah.a.sharp@linux.intel.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Reported-by:
John Covici <covici@ccs.covici.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-