- 20 Jun, 2017 38 commits
-
-
Fabio Estevam authored
clk_prepare_enable() may fail, so we should check its return value and propagate it in the case of error. Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
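As a sketch of the pattern (the surrounding function, fields and error message are illustrative, not taken from the patch):

static int example_enable_clock(struct platform_device *pdev, struct clk *clk)
{
	int ret;

	ret = clk_prepare_enable(clk);
	if (ret) {
		dev_err(&pdev->dev, "failed to enable clock: %d\n", ret);
		return ret;	/* propagate the error instead of ignoring it */
	}

	return 0;
}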
-
Wu Fengguang authored
drivers/mmc/core/block.c:1929:3-4: Unneeded semicolon Remove unneeded semicolon. Generated by: scripts/coccinelle/misc/semicolon.cocci CC: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Ulf Hansson authored
The only reason why the mmc block device driver needs to implement its own version of how to get the status of the card is that it needs to specify a different number of retries. Therefore add a new exported function which allows the caller to specify the number of retries, and convert everybody to use it, as this simplifies the code. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
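A sketch of what such an interface can look like; the name __mmc_send_status() and the wrapper are assumptions based on the description, and MMC_CMD_RETRIES stands in for the core's default retry count:

/* Exported: let the caller pick the number of retries for the status command. */
int __mmc_send_status(struct mmc_card *card, u32 *status, unsigned int retries);

/* The existing helper then becomes a thin wrapper using the default count. */
static inline int mmc_send_status(struct mmc_card *card, u32 *status)
{
	return __mmc_send_status(card, status, MMC_CMD_RETRIES);
}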
-
Linus Walleij authored
This moves the boot partition lock command (issued from sysfs) into a custom block layer request, just like the ioctl()s, getting rid of yet another instance of mmc_get_card(). Since we now have two operations issuing special DRV_OP's, we rename the result variable ->drv_op_result. Tested by locking the boot partition from userspace: > cd /sys/devices/platform/soc/80114000.sdi4_per2/mmc_host/mmc3/ mmc3:0001/block/mmcblk3/mmcblk3boot0 > echo 1 > ro_lock_until_next_power_on [ 178.645324] mmcblk3boot1: Locking boot partition ro until next power on [ 178.652221] mmcblk3boot0: Locking boot partition ro until next power on Also tested this with a huge dd job in the background: it is now possible to lock the boot partitions on the card even under heavy I/O. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Linus Walleij authored
We will need to access static functions above the pure block layer operations in the file, so move the driver operations issue function down so we can see all non-blocklayer symbols. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Linus Walleij authored
We will expand the DRV_OP usage, so we need to know which operation we're performing. Tag the operations with an enumerated type, rename the function so it is clear that it deals with any command, and put a switch statement in it. Currently only ioctls are supported. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
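A sketch of what the tagged issue path can look like (the enum and function names are illustrative; only the ioctl case is implied by the text):

enum mmc_drv_op {
	MMC_DRV_OP_IOCTL,	/* the only operation supported so far */
};

static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
{
	struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(req);

	switch (mq_rq->drv_op) {
	case MMC_DRV_OP_IOCTL:
		/* perform the ioctl against the card */
		break;
	default:
		/* unknown driver operation: fail the request */
		break;
	}
}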
-
Linus Walleij authored
Just as we can use blk_mq_rq_to_pdu() to get the per-request tag from a request, we can use blk_mq_rq_from_pdu() to get the request back from a tag. Introduce a static inline helper so it is clear what is happening. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
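The helper is presumably something along these lines (a sketch; the exact helper name is an assumption):

/* Get the struct request back from our per-request state container (the "tag"). */
static inline struct request *mmc_queue_req_to_req(struct mmc_queue_req *mqr)
{
	return blk_mq_rq_from_pdu(mqr);
}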
-
Wolfram Sang authored
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Wolfram Sang authored
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Wolfram Sang authored
Plain code move with no changes. Needed for refactoring. Also, looks nicer if request and finish_request are next to each other. Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Wolfram Sang authored
The obviously wrong comment was added in 2011 with commit df3ef2d3 ("mmc: protect the tmio_mmc driver against a theoretical race") but already obsoleted half a year later with commit b9269fdd ("mmc: tmio: fix recursive spinlock, don't schedule with interrupts disabled"). Fixes: b9269fdd ("mmc: tmio: fix recursive spinlock, don't schedule with interrupts disabled") Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Wolfram Sang authored
Split handling mrq into a separate function. We need to call it from another place soon. Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Wolfram Sang authored
This part confused me and I had to read it twice until I got it. Let's follow the standard pattern of bailing out if something is wrong and keeping the expected case in the body of the function. Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Markus Elfring authored
Omit an extra message for memory allocation failures. This issue was detected by using the Coccinelle software. Link: http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Markus Elfring authored
Omit an extra message for a memory allocation failure. This issue was detected by using the Coccinelle software. Link: http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Linus Walleij authored
commit cdf8a6fb "mmc: block: Introduce queue semantics" deleted the last user of mmc_req_is_special() and it was a horrible hack to classify requests as "special" or "not special" to begin with, so delete the helper. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Linus Walleij authored
This also switches the multiple-command ioctl() call to issue all ioctl()s through the block layer instead of going directly to the device. We extend the passed argument with an argument count and loop over all passed commands in the ioctl() issue function called from the block layer. By doing this we are again loosening the grip on the big host lock, since two calls to mmc_get_card()/mmc_put_card() are removed. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Avri Altman <Avri.Altman@sandisk.com>
-
Linus Walleij authored
This wraps single ioctl() commands into block requests using the custom block layer request types REQ_OP_DRV_IN and REQ_OP_DRV_OUT. By doing this we are loosening the grip on the big host lock, since two calls to mmc_get_card()/mmc_put_card() are removed. We are storing the ioctl() in/out argument as a pointer in the per-request struct mmc_blk_request container. Since we now let the block layer allocate this data, blk_get_request() will allocate it for us and we can immediately dereference it and use it to pass the argument into the block layer. We refactor the if/else/if/else ladder in mmc_blk_issue_rq() as part of the job, paying some extra attention to the case when a NULL req is passed into this function and making that pipeline flush more explicit. Tested on the ux500 with the userspace: mmc extcsd read /dev/mmcblk3 resulting in a successful EXTCSD info dump back to the console. This commit fixes a starvation issue in the MMC/SD stack that can easily be provoked by issuing the following commands in sequence: > dd if=/dev/mmcblk3 of=/dev/null bs=1M & > mmc extcs read /dev/mmcblk3 Before this patch, the extcsd read command would hang (starve) while waiting for the dd command to finish since the block layer was holding the card/host lock. After this patch, the extcsd ioctl() command is nicely interspersed with the rest of the block commands and we can issue a bunch of ioctl()s from userspace while there is some busy block IO going on without any problems. Conversely, userspace ioctl()s can no longer starve the block layer by holding the card/host lock. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Avri Altman <Avri.Altman@sandisk.com>
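A rough sketch of the resulting flow for one ioctl (field and helper names such as drv_op and drv_op_data are illustrative guesses; drv_op_result is the name mentioned for the result field elsewhere in this series):

/* Illustrative: wrap one ioctl in a driver-private block request and run it. */
static int mmc_blk_ioctl_via_blk_layer(struct request_queue *q,
				       struct mmc_blk_ioc_data *idata,
				       bool is_write)
{
	struct request *req;
	struct mmc_queue_req *mq_rq;
	int ret;

	req = blk_get_request(q, is_write ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
			      __GFP_RECLAIM);
	if (IS_ERR(req))
		return PTR_ERR(req);

	mq_rq = blk_mq_rq_to_pdu(req);		/* block core allocated this for us */
	mq_rq->drv_op = MMC_DRV_OP_IOCTL;
	mq_rq->drv_op_data = idata;		/* the ioctl in/out argument */

	blk_execute_rq(q, NULL, req, 0);	/* block layer calls back into the driver */
	ret = mq_rq->drv_op_result;

	blk_put_request(req);
	return ret;
}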
-
Linus Walleij authored
The variable is_rpmb is clearly a bool and even assigned true and false, yet declared as an int. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Linus Walleij authored
The mmc_queue_req is a per-request state container the MMC core uses to carry bounce buffers, pointers to asynchronous requests and so on. Currently it is allocated as a static array of objects; as a request comes in, a mmc_queue_req is assigned to it and used during the lifetime of the request. This is backwards compared to how other block layer drivers work: they usually let the block core provide a per-request struct that gets allocated right behind the struct request, and which can be obtained using the blk_mq_rq_to_pdu() helper. (The _mq_ infix in this function name is misleading: it is used by both the old and the MQ block layer.) The per-request struct gets allocated at the size stored in the queue variable .cmd_size, initialized using .init_rq_fn() and cleaned up using .exit_rq_fn(). This change makes the MMC core rely on that block layer mechanism to allocate the per-request mmc_queue_req state container. Doing this makes a lot of complicated queue handling go away. We only need to keep the .qcnt that keeps count of how many requests are currently being processed by the MMC layer. The MQ block layer will replace this as well once we transition to it. Doing this refactoring is necessary to move the ioctl() operations into custom block layer requests tagged with REQ_OP_DRV_[IN|OUT] instead of the custom code using the BigMMCHostLock that we have today: those require that per-request data be obtainable easily from a request after creating a custom request with e.g.: struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM); struct mmc_queue_req *mq_rq = req_to_mq_rq(rq); And this is not possible with the current construction, as the request is not immediately assigned the per-request state container; instead it gets assigned when the request finally enters the MMC queue, which is way too late for custom requests. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> [Ulf: Folded in the fix to drop a call to blk_cleanup_queue()] Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Heiner Kallweit <hkallweit1@gmail.com>
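A sketch of what hooking into this block layer mechanism looks like (the callback bodies are illustrative placeholders):

/* Illustrative: let the block core allocate struct mmc_queue_req behind each request. */
static int mmc_init_request(struct request_queue *q, struct request *req, gfp_t gfp)
{
	/* req_to_mq_rq(req) gives the per-request container; set up its
	 * bounce buffers / scatterlists here */
	return 0;
}

static void mmc_exit_request(struct request_queue *q, struct request *req)
{
	/* tear down what mmc_init_request() set up */
}

/* At queue setup time, before the queue is initialized: */
q->cmd_size = sizeof(struct mmc_queue_req);
q->init_rq_fn = mmc_init_request;
q->exit_rq_fn = mmc_exit_request;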
-
Linus Walleij authored
This option is activated by all multiplatform configs and what not, so we almost always have it turned on, and the memory it saves is negligible, even more so moving forward. The actual bounce buffer only gets allocated when used; the only thing the ifdefs are saving is a little bit of code. It is highly improper to have this as a Kconfig option that gets turned on by default; make this a pure runtime thing and let the host decide whether we use bounce buffers. We add a new property "disable_bounce" to the host struct. Notice that mmc_queue_calc_bouncesz() already disables the bounce buffers if host->max_segs != 1, so any arch that has a maximum number of segments higher than 1 will have bounce buffers disabled. The option CONFIG_MMC_BLOCK_BOUNCE is default y, so the majority of platforms in the kernel already have it on, and it then gets turned off at runtime since most of these have a host->max_segs > 1. The few exceptions that explicitly disable the bounce buffering in their defconfig are the following:
arch/arm/configs/colibri_pxa300_defconfig
arch/arm/configs/zeus_defconfig
- Uses MMC_PXA, drivers/mmc/host/pxamci.c
- Sets host->max_segs = NR_SG, which is 1
- This needs its bounce buffer deactivated, so we set host->disable_bounce to true in the host driver
arch/arm/configs/davinci_all_defconfig
- Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
- This driver sets host->max_segs to MAX_NR_SG, which is 16
- That means this driver anyway disables bounce buffers
- No special action needed for this platform
arch/arm/configs/lpc32xx_defconfig
arch/arm/configs/nhk8815_defconfig
arch/arm/configs/u300_defconfig
- Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
- This driver by default sets host->max_segs to NR_SG, which is 128, unless a DMA engine is used, and in that case the number of segments is also > 1
- That means this driver already disables bounce buffers
- No special action needed for these platforms
arch/arm/configs/sama5_defconfig
- Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
- Uses drivers/mmc/host/sdhci.c
- Normally sets host->max_segs to SDHCI_MAX_SEGS, which is 128, and thus disables bounce buffers
- Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
- SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
- That means that for this platform bounce buffers are already disabled at runtime
- No special action needed for this platform
arch/blackfin/configs/CM-BF533_defconfig
arch/blackfin/configs/CM-BF537E_defconfig
- Uses MMC_SPI (a simple MMC card connected on SPI pins)
- Uses drivers/mmc/host/mmc_spi.c
- Sets host->max_segs to MMC_SPI_BLOCKSATONCE, which is 128
- That means this platform already disables bounce buffers at runtime
- No special action needed for these platforms
arch/mips/configs/cavium_octeon_defconfig
- Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
- Sets host->max_segs to 16 or 1
- Setting host->disable_bounce to be sure for the 1 case
arch/mips/configs/qi_lb60_defconfig
- Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
- This sets host->max_segs to 128, so bounce buffers are already runtime disabled
- No action needed for this platform
It would be interesting to come up with a list of the platforms that actually end up using bounce buffers. I have not been able to infer such a list, but it occurs when host->max_segs == 1 and the bounce buffering is not explicitly disabled. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
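The runtime decision described above then presumably reduces to something like this (a sketch; MMC_QUEUE_BOUNCESZ stands in for the existing bounce size limit):

/* Illustrative: bounce buffers are only worthwhile for hosts limited to one segment. */
static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
{
	if (host->max_segs != 1 || host->disable_bounce)
		return 0;	/* no bounce buffer */

	return min_t(unsigned int, MMC_QUEUE_BOUNCESZ, host->max_req_size);
}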
-
Simon Horman authored
Make renesas_sdhi_sys_dmac.c a top-level module file that makes use of library code supplied by renesas_sdhi_core.c. This is in order to facilitate adding other variants of SDHI; in particular SDHI using different DMA controllers. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> [Arnd: Fixed module build error] Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Simon Horman authored
Rename the source file for SDHI. A follow-up patch will make it a library file used by a different top-level module file. The name "renesas" is chosen as the SDHI driver is applicable to a wider range of SoCs than SH-Mobile, so it seems to be a more appropriate name. However, the SDHI driver source itself is left as sh_mobile_sdhi to avoid unnecessary churn. The name "core" was chosen to reflect the desired role of this file: to provide core functionality to the SDHI driver. A follow-up patch will move the file into that role. Internal symbols have also been renamed to reflect the filename change. The .name member of struct platform_driver and the parameter to MODULE_ALIAS() have not been changed in order to avoid the complication of potentially breaking SH SoCs which still use platform drivers. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Simon Horman authored
Rename the source file for DMA for SDHI as a follow-up to attaching the DMA code to the SDHI driver rather than the tmio_core driver. The name "renesas" is chosen as the SDHI driver is applicable to a wider range of SoCs than SH-Mobile, so it seems to be a more appropriate name. However, the SDHI driver source itself is left as sh_mobile_sdhi to avoid unnecessary churn. The name sys_dmac was chosen to reflect the type of DMA used. Internal symbols have also been renamed to reflect the filename change. A follow-up patch will re-organise the SDHI driver, removing the need for renesas_sdhi_get_dma_ops(). Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Simon Horman authored
Rename tmio_mmc_pio.c to tmio_mmc_core.c to more accurately reflect its function: to provide core code for the tmio-mmc and sh-mobile-sdhi drivers. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Simon Horman authored
Refactor DMA support to allow it to be provided by a set of call-backs that are provided by a host driver. The motivation is to allow multiple DMA implementations to be provided and instantiated at run-time. Instantiate the existing DMA implementation from the sh_mobile_sdhi driver which appears to match the current use-case. This has the side effect of moving the DMA code from the tmio_core to the sh_mobile_sdhi driver. A follow-up patch will change the source file for the SDHI DMA implementation accordingly. Another follow-up patch will re-organise the SDHI driver removing the need for tmio_mmc_get_dma_ops(). Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
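A sketch of the kind of call-back set this implies (member names are assumptions based on the description):

/* Illustrative: a DMA implementation plugs into the tmio/SDHI core via callbacks. */
struct tmio_mmc_dma_ops {
	void (*start)(struct tmio_mmc_host *host, struct mmc_data *data);
	void (*enable)(struct tmio_mmc_host *host, bool enable);
	void (*request)(struct tmio_mmc_host *host, struct tmio_mmc_data *pdata);
	void (*release)(struct tmio_mmc_host *host);
	void (*abort)(struct tmio_mmc_host *host);
};

/* The SDHI driver instantiates its SYS-DMAC implementation and hands it to the
 * core at probe time, e.g. via tmio_mmc_host_probe(host, pdata, dma_ops). */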
-
Simon Horman authored
Reshuffle the comment at the top of the source dropping filenames and moving up human readable strings. This seems to be somewhat more useful information to start the source file with. It is also less fragile, f.e. to file renames. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Ulf Hansson authored
This reverts commit a6db2c86 ("mmc: dw_mmc: Don't allow Runtime PM for SDIO cards"). As dw_mmc now is capable of preventing runtime PM suspend while SDIO IRQs are enabled, let's drop the less fine-grained method, which is preventing runtime PM suspend for all SDIO cards - no matter whether SDIO IRQs are being enabled or not. In this way we don't keep the host runtime PM resumed unless it's really needed, thus avoiding wasting power. Especially when SDIO IRQs are supported via a separate out-of-band IRQ line, which isn't defined by the SDIO standard, the SDIO func driver typically doesn't enable SDIO IRQs via sdio_claim_irq(). So, for these cases we can now allow the dwmmc device to be runtime PM suspended in-between requests. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org>
-
Ulf Hansson authored
To be able to handle SDIO IRQs the dw_mmc device needs to be powered and providing clock to the SDIO card. Therefore, we must not allow the device to be runtime PM suspended while SDIO IRQs are enabled. To fix this, let's increase the runtime PM usage count while the mmc core enables SDIO IRQs. Later when the mmc core tells dw_mmc to disable SDIO IRQs, we drop the usage count to again allow runtime PM suspend. This now becomes the default behaviour for dw_mmc. In cases where SDIO IRQs can be re-routed as GPIO wake-ups during runtime PM suspend, one could potentially allow runtime PM suspend. However, that will have to be addressed as a separate change on top of this one. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org>
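A sketch of the idea (illustrative; using pm_runtime_get_noresume()/pm_runtime_put_noidle() here is an assumption about how the usage count is pinned from the enable path):

/* Illustrative: keep dw_mmc runtime-resumed for as long as SDIO IRQs are enabled. */
static void dw_mci_enable_sdio_irq(struct mmc_host *mmc, int enb)
{
	struct dw_mci_slot *slot = mmc_priv(mmc);
	struct dw_mci *host = slot->host;

	if (enb)
		pm_runtime_get_noresume(host->dev);	/* block runtime suspend */

	/* ... mask/unmask the SDIO interrupt in hardware here ... */

	if (!enb)
		pm_runtime_put_noidle(host->dev);	/* allow runtime suspend again */
}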
-
Ulf Hansson authored
Convert to use the more lightweight method for processing SDIO IRQs, which involves the following changes: - Enable MMC_CAP2_SDIO_IRQ_NOTHREAD when SDIO IRQ is supported and use sdio_signal_irq() instead of mmc_signal_sdio_irq(). - Mask the SDIO IRQ before signaling a new one to be processed. - Implement the ->ack_sdio_irq() callback to unmask the SDIO IRQ. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org>
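Roughly, the host side of such a conversion looks like this (an illustrative excerpt; __dw_mci_enable_sdio_irq() is a placeholder for whatever helper masks/unmasks the SDIO interrupt):

/* In the interrupt handler: mask the SDIO IRQ, then signal it to the core. */
if (pending & SDMMC_INT_SDIO(slot->sdio_id)) {
	mci_writel(host, RINTSTS, SDMMC_INT_SDIO(slot->sdio_id));
	__dw_mci_enable_sdio_irq(slot, 0);	/* keep it masked until acknowledged */
	sdio_signal_irq(slot->mmc);		/* hand processing to the core's work */
}

/* The new ->ack_sdio_irq() callback unmasks the interrupt again. */
static void dw_mci_ack_sdio_irq(struct mmc_host *mmc)
{
	struct dw_mci_slot *slot = mmc_priv(mmc);

	__dw_mci_enable_sdio_irq(slot, 1);
}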
-
Ulf Hansson authored
For hosts not supporting MMC_CAP2_SDIO_IRQ_NOTHREAD but MMC_CAP_SDIO_IRQ, the SDIO IRQs are processed from a dedicated kernel thread. For these cases, the host calls mmc_signal_sdio_irq() from its ISR to signal a new SDIO IRQ. Signaling an SDIO IRQ causes the host's ->enable_sdio_irq() callback to be invoked to temporarily disable the IRQs, before the kernel thread is woken up to process it. When processing of the IRQs is completed, they are re-enabled by the kernel thread, again via invoking the host's ->enable_sdio_irq(). The observation from this is that the execution path is unnecessarily complex, as the host driver already knows that it needs to temporarily disable the IRQs before signaling a new one. Moreover, replacing the kernel thread with a work/workqueue would not only greatly simplify the code, but also make it more robust. To address the above problems, let's continue to build upon the support for MMC_CAP2_SDIO_IRQ_NOTHREAD, as it already implements SDIO IRQs to be processed without using the clumsy kernel thread and without the ping-pong calls of the host's ->enable_sdio_irq() callback for each processed IRQ. Therefore, let's add a new API sdio_signal_irq(), which enables hosts to signal/process SDIO IRQs by using a work/workqueue, rather than using the kernel thread. Add also a new host callback ->ack_sdio_irq(), which the work invokes when the SDIO IRQs have been processed. This informs the host about when it shall re-enable the SDIO IRQs. Potentially, we could re-use the existing ->enable_sdio_irq() callback instead of adding a new one, however it has turned out that it's more convenient for hosts to get this information via a separate callback. Hosts that want to use this new method to signal/process SDIO IRQs must enable MMC_CAP2_SDIO_IRQ_NOTHREAD and implement the ->ack_sdio_irq() callback. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org>
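On the core side, the description suggests roughly the following shape (a sketch, not the exact implementation; the sdio_irq_work/sdio_irq_pending names are assumptions):

/* Illustrative: signaling queues a work item instead of waking a kernel thread. */
void sdio_signal_irq(struct mmc_host *host)
{
	host->sdio_irq_pending = true;
	queue_delayed_work(system_wq, &host->sdio_irq_work, 0);
}

static void sdio_irq_work(struct work_struct *work)
{
	struct mmc_host *host =
		container_of(work, struct mmc_host, sdio_irq_work.work);

	/* process the claimed SDIO IRQs; once done, ->ack_sdio_irq() tells the
	 * host that it may unmask/re-enable the SDIO interrupt */
	sdio_run_irqs(host);
}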
-
Ulf Hansson authored
In cases when MMC_CAP2_SDIO_IRQ_NOTHREAD is set, there is a minor window in which the mmc host could call sdio_run_irqs() while in fact an SDIO func driver could have decided to release the SDIO IRQ via a call to sdio_release_irq(). In this scenario, processing of the SDIO IRQs is done even though no IRQ is claimed, which is not what we want. To prevent this from happening, close the window by validating that at least one SDIO IRQ is claimed before deciding to process them. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org>
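In other words, a guard of roughly this shape (sketch; host->sdio_irqs is assumed to be the count of currently claimed SDIO IRQs):

void sdio_run_irqs(struct mmc_host *host)
{
	mmc_claim_host(host);
	if (host->sdio_irqs) {	/* only process if at least one SDIO IRQ is still claimed */
		process_sdio_pending_irqs(host);
		/* ... existing processing/acknowledgement continues here ... */
	}
	mmc_release_host(host);
}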
-
Ulf Hansson authored
In case a pwrseq-emmc has been bound to the host, a call to mmc_power_up() triggers an eMMC HW reset via the pwrseq_emmc's ->post_power_on() callback. This isn't really what we want, as mmc_power_up() is called each time when resuming the card. As a matter of fact, the current approach may also violate the eMMC spec, as the involved delays managed in pwrseq_emmc assume both VCC and VCCQ have been turned on, which isn't the case for VCCQ, unless the regulator is always on. Fix this behaviour by aligning to the same procedure used when the mmc host implements the ->hw_reset() callback and has the MMC_CAP_HW_RESET flag set. In this way the eMMC HW reset is issued at card detection scan, to cope with bogus bootloaders, and in the error recovery path via the mmc specific bus_ops->reset() callback. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Ulf Hansson authored
The ->reset() callback is needed to implement a better support for eMMC HW reset. The following changes will take advantage of the new callback. Suggested-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
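Presumably the ops structure simply grows a new member (a sketch; the existing members shown for context may not be exhaustive):

struct mmc_pwrseq_ops {
	void (*pre_power_on)(struct mmc_host *host);
	void (*post_power_on)(struct mmc_host *host);
	void (*power_off)(struct mmc_host *host);
	void (*reset)(struct mmc_host *host);	/* new: issue an eMMC HW reset */
};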
-
Johan Hovold authored
Add the missing endianness conversions when printing the USB device-descriptor idVendor and idProduct fields during probe. Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Andy Shevchenko authored
AVR32 is gone. Now it's time to clean up the driver by removing leftovers that were used by AVR32-related code. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Colin Ian King authored
At the end of either of the read or write loops len is always zero and hence the non-zero check on len and return of -EIO is redundant and can be removed. Detected by CoverityScan, CID#114293 ("Logically dead code") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Shubhrajyoti Datta authored
ret is signed but is printed as unsigned; fix that. If printed as a negative number the result is easier to read. No functional change. Signed-off-by: Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 19 Jun, 2017 2 commits
-
-
Linus Torvalds authored
-
Hugh Dickins authored
Stack guard page is a useful feature to reduce a risk of stack smashing into a different mapping. We have been using a single page gap which is sufficient to prevent having stack adjacent to a different mapping. But this seems to be insufficient in the light of the stack usage in userspace. E.g. glibc uses as large as 64kB alloca() in many commonly used functions. Others use constructs like gid_t buffer[NGROUPS_MAX] which is 256kB or stack strings with MAX_ARG_STRLEN. This will become especially dangerous for suid binaries and the default no limit for the stack size limit because those applications can be tricked to consume a large portion of the stack and a single glibc call could jump over the guard page. These attacks are not theoretical, unfortunately. Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size because systems with larger base pages might cap stack allocations in the PAGE_SIZE units) which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix because the problem is somehow inherent, but it should reduce attack space a lot. One could argue that the gap size should be configurable from userspace, but that can be done later when somebody finds that the new 1MB is wrong for some special case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units). Implementation wise, first delete all the old code for stack guard page: because although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace - case in point, a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode. Instead of keeping the gap inside the stack vma, maintain the stack guard gap as a gap between vmas: using vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap - mainly arch_get_unmapped_area(), and the vma tree's subtree_gap support for that. Original-patch-by: Oleg Nesterov <oleg@redhat.com> Original-patch-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Tested-by: Helge Deller <deller@gmx.de> # parisc Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
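A sketch of the vm_start_gap() idea (illustrative; stack_guard_gap is the new tunable, kept here in bytes and defaulting to 1MB on 4kB-page systems):

/* Default gap below a VM_GROWSDOWN (stack) vma: 256 pages == 1MB with 4kB pages. */
unsigned long stack_guard_gap = 256UL << PAGE_SHIFT;

/* Use this instead of vma->vm_start wherever the guard gap must be respected. */
static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
{
	unsigned long vm_start = vma->vm_start;

	if (vma->vm_flags & VM_GROWSDOWN) {
		vm_start -= stack_guard_gap;
		if (vm_start > vma->vm_start)	/* underflow: clamp to 0 */
			vm_start = 0;
	}
	return vm_start;
}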
-