Commit ed5dc237 authored by Linus Torvalds

Merge tag 'mmc-updates-for-3.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc

Pull MMC update from Chris Ball:
 "MMC highlights for 3.9:

  Core:
   - Support for packed commands in eMMC 4.5.  (This requires a host
     capability to be turned on.  It increases write throughput by 20%+,
     but may also increase average write latency; more testing needed.)
   - Add DT bindings for capability flags.
   - Add mmc_of_parse() for shared DT parsing between drivers.

  Drivers:
   - android-goldfish: New MMC driver for the Android Goldfish emulator.
   - mvsdio: Add DT bindings, pinctrl, use slot-gpio for card detection.
   - omap_hsmmc: Fix boot hangs with RPMB partitions.
   - sdhci-bcm2835: New driver for controller used by Raspberry Pi.
   - sdhci-esdhc-imx: Add 8-bit data, auto CMD23 support, use slot-gpio.
   - sh_mmcif: Add support for eMMC DDR, bundled MMCIF IRQs.
   - tmio_mmc: Add DT bindings, support for vccq regulator"

* tag 'mmc-updates-for-3.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc: (92 commits)
  mmc: tegra: assume CONFIG_OF, remove platform data
  mmc: add DT bindings for more MMC capability flags
  mmc: tmio: add support for the VccQ regulator
  mmc: tmio: remove unused and deprecated symbols
  mmc: sh_mobile_sdhi: use managed resource allocations
  mmc: sh_mobile_sdhi: remove unused .pdata field
  mmc: tmio-mmc: parse device-tree bindings
  mmc: tmio-mmc: define device-tree bindings
  mmc: sh_mmcif: use mmc_of_parse() to parse standard MMC DT bindings
  mmc: (cosmetic) remove "extern" from function declarations
  mmc: provide a standard MMC device-tree binding parser centrally
  mmc: detailed definition of CD and WP MMC line polarities in DT
  mmc: sdhi, tmio: only check flags in tmio-mmc driver proper
  mmc: sdhci: Fix parameter of sdhci_do_start_signal_voltage_switch()
  mmc: sdhci: check voltage range only on regulators aware of voltage value
  mmc: bcm2835: set SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK
  mmc: support packed write command for eMMC4.5 devices
  mmc: add packed command feature of eMMC4.5
  mmc: rtsx: remove driving adjustment
  mmc: use regulator_can_change_voltage() instead of regulator_count_voltages
  ...
parents 0512c04a 0e786102
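The packed-command support highlighted in the message above is opt-in per host: a host driver has to advertise the MMC_CAP2_PACKED_CMD capability before the core (see the mmc_init_card() hunks in mmc.c below) will enable packed writes on an eMMC 4.5 card. A minimal, hedged sketch of the opt-in, assuming a platform host driver with a struct mmc_host *host in its probe path:

	/* Opt in to eMMC 4.5 packed commands; the core only enables packed
	 * writes when this capability is set and the card advertises the
	 * mandatory minimum depths (read: 5, write: 3). */
	host->caps2 |= MMC_CAP2_PACKED_CMD;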
Broadcom BCM2835 SDHCI controller
This file documents differences between the core properties described
by mmc.txt and the properties that represent the BCM2835 controller.
Required properties:
- compatible : Should be "brcm,bcm2835-sdhci".
- clocks : The clock feeding the SDHCI controller.
Example:
sdhci: sdhci {
compatible = "brcm,bcm2835-sdhci";
reg = <0x7e300000 0x100>;
interrupts = <2 30>;
clocks = <&clk_mmc>;
bus-width = <4>;
};
......@@ -6,23 +6,45 @@ Interpreted by the OF core:
- reg: Registers location and length.
- interrupts: Interrupts used by the MMC controller.
Required properties:
- bus-width: Number of data lines, can be <1>, <4>, or <8>
Card detection:
If no property below is supplied, standard SDHCI card detect is used.
If no property below is supplied, host native card detect is used.
Only one of the properties in this section should be supplied:
- broken-cd: There is no card detection available; polling must be used.
- cd-gpios: Specify GPIOs for card detection, see gpio binding
- non-removable: non-removable slot (like eMMC); assume always present.
Optional properties:
- bus-width: Number of data lines, can be <1>, <4>, or <8>. The default
will be <1> if the property is absent.
- wp-gpios: Specify GPIOs for write protection, see gpio binding
- cd-inverted: when present, polarity on the cd gpio line is inverted
- wp-inverted: when present, polarity on the wp gpio line is inverted
- cd-inverted: when present, polarity on the CD line is inverted. See the note
below for the case when a GPIO is used for the CD line
- wp-inverted: when present, polarity on the WP line is inverted. See the note
below for the case when a GPIO is used for the WP line
- max-frequency: maximum operating clock frequency
- no-1-8-v: when present, denotes that 1.8v card voltage is not supported on
this system, even if the controller claims it is.
- cap-sd-highspeed: SD high-speed timing is supported
- cap-mmc-highspeed: MMC high-speed timing is supported
- cap-power-off-card: powering off the card is safe
- cap-sdio-irq: enable SDIO IRQ signalling on this interface
*NOTE* on CD and WP polarity. To use line-polarity properties that are common to
all SD/MMC host controllers, we have to fix the meaning of the "normal" and
"inverted" line levels. We choose to follow the SDHCI standard, which specifies
both those lines as "active low." Therefore, using the "cd-inverted" property
means that the CD line is active high, i.e. it is high when a card is inserted.
Similar logic applies to the "wp-inverted" property.
CD and WP lines can be implemented in hardware in one of two ways: as GPIOs,
specified in the cd-gpios and wp-gpios properties, or as dedicated pins. The
polarity of dedicated pins can be specified using the *-inverted properties.
GPIO polarity can also be specified using the OF_GPIO_ACTIVE_LOW flag, which
creates an ambiguity in the GPIO case. We choose to use XOR logic for GPIO CD
and WP lines: the two properties are "superimposed," so, for example, leaving
the OF_GPIO_ACTIVE_LOW flag clear and specifying the respective *-inverted
property results in a double inversion and actually means the "normal" line
polarity is in effect (see the sketch below).
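For illustration, a minimal C sketch of the XOR rule above, mirroring what the central DT parser (mmc_of_parse(), added later in this series) does for the CD line. The helper name is hypothetical; np and flags are assumed to come from the usual of_get_named_gpio_flags() lookup of "cd-gpios":

	static void example_apply_cd_polarity(struct mmc_host *host,
					      struct device_node *np,
					      enum of_gpio_flags flags)
	{
		bool explicit_inv_cd = of_property_read_bool(np, "cd-inverted");
		bool gpio_inv_cd = !(flags & OF_GPIO_ACTIVE_LOW);

		/* The two sources of inversion are XORed ("superimposed"), so
		 * a double inversion cancels out and yields normal polarity. */
		if (explicit_inv_cd ^ gpio_inv_cd)
			host->caps2 |= MMC_CAP2_CD_ACTIVE_HIGH;
	}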
Optional SDIO properties:
- keep-power-in-suspend: Preserves card power during a suspend/resume cycle
......
* Marvell orion-sdio controller
This file documents differences between the core properties in mmc.txt
and the properties used by the orion-sdio driver.
- compatible: Should be "marvell,orion-sdio"
- clocks: reference to the clock of the SDIO interface
Example:
mvsdio@d00d4000 {
compatible = "marvell,orion-sdio";
reg = <0xd00d4000 0x200>;
interrupts = <54>;
clocks = <&gateclk 17>;
status = "disabled";
};
......@@ -26,8 +26,16 @@ Required Properties:
* bus-width: as documented in mmc core bindings.
* wp-gpios: specifies the write protect gpio line. The format of the
gpio specifier depends on the gpio controller. If the write-protect
line is not available, this property is optional.
gpio specifier depends on the gpio controller. If a GPIO is not used
for write-protect, this property is optional.
* disable-wp: If the wp-gpios property isn't present then (by default)
we'd assume that the write protect is hooked up directly to the
controller's special purpose write protect line (accessible via
the WRTPRT register). However, it's possible that we simply don't
want write protect. In that case specify 'disable-wp'.
NOTE: This property is not required for slots known to always
connect to eMMC or SDIO cards.
Optional properties:
......
* Toshiba Mobile IO SD/MMC controller
The tmio-mmc driver doesn't probe its devices actively; instead, its binding to
devices is managed by either MFD drivers or by the sh_mobile_sdhi platform
driver. Those drivers supply the tmio-mmc driver with platform data that either
describes hardware capabilities known to them or is obtained from their own
platform data or from their DT information. In the latter case all compulsory
and any optional properties common to all SD/MMC drivers, as described in
mmc.txt, can be used. Additionally, the following tmio_mmc-specific optional
bindings can be used.
Optional properties:
- toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
When used with Renesas SDHI hardware, the following compatibility strings
configure various model-specific properties:
"renesas,sh7372-sdhi": (default) compatible with SH7372
"renesas,r8a7740-sdhi": compatible with R8A7740: certain MMC/SD commands have to
wait for the interface to become idle.
......@@ -5692,7 +5692,7 @@ S: Maintained
F: drivers/mmc/host/omap.c
OMAP HS MMC SUPPORT
M: Venkatraman S <svenkatr@ti.com>
M: Balaji T K <balajitk@ti.com>
L: linux-mmc@vger.kernel.org
L: linux-omap@vger.kernel.org
S: Maintained
......@@ -6804,6 +6804,14 @@ F: include/linux/dw_dmac.h
F: drivers/dma/dw_dmac_regs.h
F: drivers/dma/dw_dmac.c
SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
M: Seungwon Jeon <tgih.jun@samsung.com>
M: Jaehoon Chung <jh80.chung@samsung.com>
L: linux-mmc@vger.kernel.org
S: Maintained
F: include/linux/mmc/dw_mmc.h
F: drivers/mmc/host/dw_mmc*
TIMEKEEPING, NTP
M: John Stultz <john.stultz@linaro.org>
M: Thomas Gleixner <tglx@linutronix.de>
......
......@@ -151,6 +151,7 @@ slot@0 {
reg = <0>;
bus-width = <4>;
samsung,cd-pinmux-gpio = <&gpc3 2 2 3 3>;
disable-wp;
gpios = <&gpc3 0 2 0 3>, <&gpc3 1 2 0 3>,
<&gpc3 3 2 3 3>, <&gpc3 4 2 3 3>,
<&gpc3 5 2 3 3>, <&gpc3 6 2 3 3>,
......
......@@ -59,6 +59,12 @@ MODULE_ALIAS("mmc:block");
#define INAND_CMD38_ARG_SECTRIM2 0x88
#define MMC_BLK_TIMEOUT_MS (10 * 60 * 1000) /* 10 minute timeout */
#define mmc_req_rel_wr(req) (((req->cmd_flags & REQ_FUA) || \
(req->cmd_flags & REQ_META)) && \
(rq_data_dir(req) == WRITE))
#define PACKED_CMD_VER 0x01
#define PACKED_CMD_WR 0x02
static DEFINE_MUTEX(block_mutex);
/*
......@@ -89,6 +95,7 @@ struct mmc_blk_data {
unsigned int flags;
#define MMC_BLK_CMD23 (1 << 0) /* Can do SET_BLOCK_COUNT for multiblock */
#define MMC_BLK_REL_WR (1 << 1) /* MMC Reliable write support */
#define MMC_BLK_PACKED_CMD (1 << 2) /* MMC packed command support */
unsigned int usage;
unsigned int read_only;
......@@ -113,15 +120,10 @@ struct mmc_blk_data {
static DEFINE_MUTEX(open_lock);
enum mmc_blk_status {
MMC_BLK_SUCCESS = 0,
MMC_BLK_PARTIAL,
MMC_BLK_CMD_ERR,
MMC_BLK_RETRY,
MMC_BLK_ABORT,
MMC_BLK_DATA_ERR,
MMC_BLK_ECC_ERR,
MMC_BLK_NOMEDIUM,
enum {
MMC_PACKED_NR_IDX = -1,
MMC_PACKED_NR_ZERO,
MMC_PACKED_NR_SINGLE,
};
module_param(perdev_minors, int, 0444);
......@@ -131,6 +133,19 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
struct mmc_blk_data *md);
static int get_card_status(struct mmc_card *card, u32 *status, int retries);
static inline void mmc_blk_clear_packed(struct mmc_queue_req *mqrq)
{
struct mmc_packed *packed = mqrq->packed;
BUG_ON(!packed);
mqrq->cmd_type = MMC_PACKED_NONE;
packed->nr_entries = MMC_PACKED_NR_ZERO;
packed->idx_failure = MMC_PACKED_NR_IDX;
packed->retries = 0;
packed->blocks = 0;
}
static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
{
struct mmc_blk_data *md;
......@@ -1148,12 +1163,78 @@ static int mmc_blk_err_check(struct mmc_card *card,
if (!brq->data.bytes_xfered)
return MMC_BLK_RETRY;
if (mmc_packed_cmd(mq_mrq->cmd_type)) {
if (unlikely(brq->data.blocks << 9 != brq->data.bytes_xfered))
return MMC_BLK_PARTIAL;
else
return MMC_BLK_SUCCESS;
}
if (blk_rq_bytes(req) != brq->data.bytes_xfered)
return MMC_BLK_PARTIAL;
return MMC_BLK_SUCCESS;
}
static int mmc_blk_packed_err_check(struct mmc_card *card,
struct mmc_async_req *areq)
{
struct mmc_queue_req *mq_rq = container_of(areq, struct mmc_queue_req,
mmc_active);
struct request *req = mq_rq->req;
struct mmc_packed *packed = mq_rq->packed;
int err, check, status;
u8 *ext_csd;
BUG_ON(!packed);
packed->retries--;
check = mmc_blk_err_check(card, areq);
err = get_card_status(card, &status, 0);
if (err) {
pr_err("%s: error %d sending status command\n",
req->rq_disk->disk_name, err);
return MMC_BLK_ABORT;
}
if (status & R1_EXCEPTION_EVENT) {
ext_csd = kzalloc(512, GFP_KERNEL);
if (!ext_csd) {
pr_err("%s: unable to allocate buffer for ext_csd\n",
req->rq_disk->disk_name);
return -ENOMEM;
}
err = mmc_send_ext_csd(card, ext_csd);
if (err) {
pr_err("%s: error %d sending ext_csd\n",
req->rq_disk->disk_name, err);
check = MMC_BLK_ABORT;
goto free;
}
if ((ext_csd[EXT_CSD_EXP_EVENTS_STATUS] &
EXT_CSD_PACKED_FAILURE) &&
(ext_csd[EXT_CSD_PACKED_CMD_STATUS] &
EXT_CSD_PACKED_GENERIC_ERROR)) {
if (ext_csd[EXT_CSD_PACKED_CMD_STATUS] &
EXT_CSD_PACKED_INDEXED_ERROR) {
packed->idx_failure =
ext_csd[EXT_CSD_PACKED_FAILURE_INDEX] - 1;
check = MMC_BLK_PARTIAL;
}
pr_err("%s: packed cmd failed, nr %u, sectors %u, "
"failure index: %d\n",
req->rq_disk->disk_name, packed->nr_entries,
packed->blocks, packed->idx_failure);
}
free:
kfree(ext_csd);
}
return check;
}
static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
struct mmc_card *card,
int disable_multi,
......@@ -1308,10 +1389,221 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
mmc_queue_bounce_pre(mqrq);
}
static inline u8 mmc_calc_packed_hdr_segs(struct request_queue *q,
struct mmc_card *card)
{
unsigned int hdr_sz = mmc_large_sector(card) ? 4096 : 512;
unsigned int max_seg_sz = queue_max_segment_size(q);
unsigned int len, nr_segs = 0;
do {
len = min(hdr_sz, max_seg_sz);
hdr_sz -= len;
nr_segs++;
} while (hdr_sz);
return nr_segs;
}
static u8 mmc_blk_prep_packed_list(struct mmc_queue *mq, struct request *req)
{
struct request_queue *q = mq->queue;
struct mmc_card *card = mq->card;
struct request *cur = req, *next = NULL;
struct mmc_blk_data *md = mq->data;
struct mmc_queue_req *mqrq = mq->mqrq_cur;
bool en_rel_wr = card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN;
unsigned int req_sectors = 0, phys_segments = 0;
unsigned int max_blk_count, max_phys_segs;
bool put_back = true;
u8 max_packed_rw = 0;
u8 reqs = 0;
if (!(md->flags & MMC_BLK_PACKED_CMD))
goto no_packed;
if ((rq_data_dir(cur) == WRITE) &&
mmc_host_packed_wr(card->host))
max_packed_rw = card->ext_csd.max_packed_writes;
if (max_packed_rw == 0)
goto no_packed;
if (mmc_req_rel_wr(cur) &&
(md->flags & MMC_BLK_REL_WR) && !en_rel_wr)
goto no_packed;
if (mmc_large_sector(card) &&
!IS_ALIGNED(blk_rq_sectors(cur), 8))
goto no_packed;
mmc_blk_clear_packed(mqrq);
max_blk_count = min(card->host->max_blk_count,
card->host->max_req_size >> 9);
if (unlikely(max_blk_count > 0xffff))
max_blk_count = 0xffff;
max_phys_segs = queue_max_segments(q);
req_sectors += blk_rq_sectors(cur);
phys_segments += cur->nr_phys_segments;
if (rq_data_dir(cur) == WRITE) {
req_sectors += mmc_large_sector(card) ? 8 : 1;
phys_segments += mmc_calc_packed_hdr_segs(q, card);
}
do {
if (reqs >= max_packed_rw - 1) {
put_back = false;
break;
}
spin_lock_irq(q->queue_lock);
next = blk_fetch_request(q);
spin_unlock_irq(q->queue_lock);
if (!next) {
put_back = false;
break;
}
if (mmc_large_sector(card) &&
!IS_ALIGNED(blk_rq_sectors(next), 8))
break;
if (next->cmd_flags & REQ_DISCARD ||
next->cmd_flags & REQ_FLUSH)
break;
if (rq_data_dir(cur) != rq_data_dir(next))
break;
if (mmc_req_rel_wr(next) &&
(md->flags & MMC_BLK_REL_WR) && !en_rel_wr)
break;
req_sectors += blk_rq_sectors(next);
if (req_sectors > max_blk_count)
break;
phys_segments += next->nr_phys_segments;
if (phys_segments > max_phys_segs)
break;
list_add_tail(&next->queuelist, &mqrq->packed->list);
cur = next;
reqs++;
} while (1);
if (put_back) {
spin_lock_irq(q->queue_lock);
blk_requeue_request(q, next);
spin_unlock_irq(q->queue_lock);
}
if (reqs > 0) {
list_add(&req->queuelist, &mqrq->packed->list);
mqrq->packed->nr_entries = ++reqs;
mqrq->packed->retries = reqs;
return reqs;
}
no_packed:
mqrq->cmd_type = MMC_PACKED_NONE;
return 0;
}
static void mmc_blk_packed_hdr_wrq_prep(struct mmc_queue_req *mqrq,
struct mmc_card *card,
struct mmc_queue *mq)
{
struct mmc_blk_request *brq = &mqrq->brq;
struct request *req = mqrq->req;
struct request *prq;
struct mmc_blk_data *md = mq->data;
struct mmc_packed *packed = mqrq->packed;
bool do_rel_wr, do_data_tag;
u32 *packed_cmd_hdr;
u8 hdr_blocks;
u8 i = 1;
BUG_ON(!packed);
mqrq->cmd_type = MMC_PACKED_WRITE;
packed->blocks = 0;
packed->idx_failure = MMC_PACKED_NR_IDX;
packed_cmd_hdr = packed->cmd_hdr;
memset(packed_cmd_hdr, 0, sizeof(packed->cmd_hdr));
packed_cmd_hdr[0] = (packed->nr_entries << 16) |
(PACKED_CMD_WR << 8) | PACKED_CMD_VER;
hdr_blocks = mmc_large_sector(card) ? 8 : 1;
/*
* Argument for each entry of packed group
*/
list_for_each_entry(prq, &packed->list, queuelist) {
do_rel_wr = mmc_req_rel_wr(prq) && (md->flags & MMC_BLK_REL_WR);
do_data_tag = (card->ext_csd.data_tag_unit_size) &&
(prq->cmd_flags & REQ_META) &&
(rq_data_dir(prq) == WRITE) &&
((brq->data.blocks * brq->data.blksz) >=
card->ext_csd.data_tag_unit_size);
/* Argument of CMD23 */
packed_cmd_hdr[(i * 2)] =
(do_rel_wr ? MMC_CMD23_ARG_REL_WR : 0) |
(do_data_tag ? MMC_CMD23_ARG_TAG_REQ : 0) |
blk_rq_sectors(prq);
/* Argument of CMD18 or CMD25 */
packed_cmd_hdr[((i * 2)) + 1] =
mmc_card_blockaddr(card) ?
blk_rq_pos(prq) : blk_rq_pos(prq) << 9;
packed->blocks += blk_rq_sectors(prq);
i++;
}
memset(brq, 0, sizeof(struct mmc_blk_request));
brq->mrq.cmd = &brq->cmd;
brq->mrq.data = &brq->data;
brq->mrq.sbc = &brq->sbc;
brq->mrq.stop = &brq->stop;
brq->sbc.opcode = MMC_SET_BLOCK_COUNT;
brq->sbc.arg = MMC_CMD23_ARG_PACKED | (packed->blocks + hdr_blocks);
brq->sbc.flags = MMC_RSP_R1 | MMC_CMD_AC;
brq->cmd.opcode = MMC_WRITE_MULTIPLE_BLOCK;
brq->cmd.arg = blk_rq_pos(req);
if (!mmc_card_blockaddr(card))
brq->cmd.arg <<= 9;
brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
brq->data.blksz = 512;
brq->data.blocks = packed->blocks + hdr_blocks;
brq->data.flags |= MMC_DATA_WRITE;
brq->stop.opcode = MMC_STOP_TRANSMISSION;
brq->stop.arg = 0;
brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
mmc_set_data_timeout(&brq->data, card);
brq->data.sg = mqrq->sg;
brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
mqrq->mmc_active.mrq = &brq->mrq;
mqrq->mmc_active.err_check = mmc_blk_packed_err_check;
mmc_queue_bounce_pre(mqrq);
}
static int mmc_blk_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
struct mmc_blk_request *brq, struct request *req,
int ret)
{
struct mmc_queue_req *mq_rq;
mq_rq = container_of(brq, struct mmc_queue_req, brq);
/*
* If this is an SD card and we're writing, we can first
* mark the known good sectors as ok.
......@@ -1328,11 +1620,84 @@ static int mmc_blk_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
ret = blk_end_request(req, 0, blocks << 9);
}
} else {
ret = blk_end_request(req, 0, brq->data.bytes_xfered);
if (!mmc_packed_cmd(mq_rq->cmd_type))
ret = blk_end_request(req, 0, brq->data.bytes_xfered);
}
return ret;
}
static int mmc_blk_end_packed_req(struct mmc_queue_req *mq_rq)
{
struct request *prq;
struct mmc_packed *packed = mq_rq->packed;
int idx = packed->idx_failure, i = 0;
int ret = 0;
BUG_ON(!packed);
while (!list_empty(&packed->list)) {
prq = list_entry_rq(packed->list.next);
if (idx == i) {
/* retry from error index */
packed->nr_entries -= idx;
mq_rq->req = prq;
ret = 1;
if (packed->nr_entries == MMC_PACKED_NR_SINGLE) {
list_del_init(&prq->queuelist);
mmc_blk_clear_packed(mq_rq);
}
return ret;
}
list_del_init(&prq->queuelist);
blk_end_request(prq, 0, blk_rq_bytes(prq));
i++;
}
mmc_blk_clear_packed(mq_rq);
return ret;
}
static void mmc_blk_abort_packed_req(struct mmc_queue_req *mq_rq)
{
struct request *prq;
struct mmc_packed *packed = mq_rq->packed;
BUG_ON(!packed);
while (!list_empty(&packed->list)) {
prq = list_entry_rq(packed->list.next);
list_del_init(&prq->queuelist);
blk_end_request(prq, -EIO, blk_rq_bytes(prq));
}
mmc_blk_clear_packed(mq_rq);
}
static void mmc_blk_revert_packed_req(struct mmc_queue *mq,
struct mmc_queue_req *mq_rq)
{
struct request *prq;
struct request_queue *q = mq->queue;
struct mmc_packed *packed = mq_rq->packed;
BUG_ON(!packed);
while (!list_empty(&packed->list)) {
prq = list_entry_rq(packed->list.prev);
if (prq->queuelist.prev != &packed->list) {
list_del_init(&prq->queuelist);
spin_lock_irq(q->queue_lock);
blk_requeue_request(mq->queue, prq);
spin_unlock_irq(q->queue_lock);
} else {
list_del_init(&prq->queuelist);
}
}
mmc_blk_clear_packed(mq_rq);
}
static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
{
struct mmc_blk_data *md = mq->data;
......@@ -1343,10 +1708,15 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
struct mmc_queue_req *mq_rq;
struct request *req = rqc;
struct mmc_async_req *areq;
const u8 packed_nr = 2;
u8 reqs = 0;
if (!rqc && !mq->mqrq_prev->req)
return 0;
if (rqc)
reqs = mmc_blk_prep_packed_list(mq, rqc);
do {
if (rqc) {
/*
......@@ -1357,15 +1727,24 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
(card->ext_csd.data_sector_size == 4096)) {
pr_err("%s: Transfer size is not 4KB sector size aligned\n",
req->rq_disk->disk_name);
mq_rq = mq->mqrq_cur;
goto cmd_abort;
}
mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
if (reqs >= packed_nr)
mmc_blk_packed_hdr_wrq_prep(mq->mqrq_cur,
card, mq);
else
mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
areq = &mq->mqrq_cur->mmc_active;
} else
areq = NULL;
areq = mmc_start_req(card->host, areq, (int *) &status);
if (!areq)
if (!areq) {
if (status == MMC_BLK_NEW_REQUEST)
mq->flags |= MMC_QUEUE_NEW_REQUEST;
return 0;
}
mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
brq = &mq_rq->brq;
......@@ -1380,8 +1759,15 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
* A block was successfully transferred.
*/
mmc_blk_reset_success(md, type);
ret = blk_end_request(req, 0,
if (mmc_packed_cmd(mq_rq->cmd_type)) {
ret = mmc_blk_end_packed_req(mq_rq);
break;
} else {
ret = blk_end_request(req, 0,
brq->data.bytes_xfered);
}
/*
* If the blk_end_request function returns non-zero even
* though all data has been transferred and no errors
......@@ -1414,7 +1800,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
err = mmc_blk_reset(md, card->host, type);
if (!err)
break;
if (err == -ENODEV)
if (err == -ENODEV ||
mmc_packed_cmd(mq_rq->cmd_type))
goto cmd_abort;
/* Fall through */
}
......@@ -1438,30 +1825,62 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
break;
case MMC_BLK_NOMEDIUM:
goto cmd_abort;
default:
pr_err("%s: Unhandled return value (%d)",
req->rq_disk->disk_name, status);
goto cmd_abort;
}
if (ret) {
/*
* In case of an incomplete request
* prepare it again and resend.
*/
mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq);
mmc_start_req(card->host, &mq_rq->mmc_active, NULL);
if (mmc_packed_cmd(mq_rq->cmd_type)) {
if (!mq_rq->packed->retries)
goto cmd_abort;
mmc_blk_packed_hdr_wrq_prep(mq_rq, card, mq);
mmc_start_req(card->host,
&mq_rq->mmc_active, NULL);
} else {
/*
* In case of an incomplete request
* prepare it again and resend.
*/
mmc_blk_rw_rq_prep(mq_rq, card,
disable_multi, mq);
mmc_start_req(card->host,
&mq_rq->mmc_active, NULL);
}
}
} while (ret);
return 1;
cmd_abort:
if (mmc_card_removed(card))
req->cmd_flags |= REQ_QUIET;
while (ret)
ret = blk_end_request(req, -EIO, blk_rq_cur_bytes(req));
if (mmc_packed_cmd(mq_rq->cmd_type)) {
mmc_blk_abort_packed_req(mq_rq);
} else {
if (mmc_card_removed(card))
req->cmd_flags |= REQ_QUIET;
while (ret)
ret = blk_end_request(req, -EIO,
blk_rq_cur_bytes(req));
}
start_new_req:
if (rqc) {
mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
mmc_start_req(card->host, &mq->mqrq_cur->mmc_active, NULL);
if (mmc_card_removed(card)) {
rqc->cmd_flags |= REQ_QUIET;
blk_end_request_all(rqc, -EIO);
} else {
/*
* If the current request is packed, it needs to be put back.
*/
if (mmc_packed_cmd(mq->mqrq_cur->cmd_type))
mmc_blk_revert_packed_req(mq, mq->mqrq_cur);
mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
mmc_start_req(card->host,
&mq->mqrq_cur->mmc_active, NULL);
}
}
return 0;
......@@ -1472,6 +1891,8 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
int ret;
struct mmc_blk_data *md = mq->data;
struct mmc_card *card = md->queue.card;
struct mmc_host *host = card->host;
unsigned long flags;
if (req && !mq->mqrq_prev->req)
/* claim host only for the first request */
......@@ -1486,6 +1907,7 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
goto out;
}
mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
if (req && req->cmd_flags & REQ_DISCARD) {
/* complete ongoing async transfer before issuing discard */
if (card->host->areq)
......@@ -1501,11 +1923,16 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
mmc_blk_issue_rw_rq(mq, NULL);
ret = mmc_blk_issue_flush(mq, req);
} else {
if (!req && host->areq) {
spin_lock_irqsave(&host->context_info.lock, flags);
host->context_info.is_waiting_last_req = true;
spin_unlock_irqrestore(&host->context_info.lock, flags);
}
ret = mmc_blk_issue_rw_rq(mq, req);
}
out:
if (!req)
if (!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST))
/* release host only when there are no more requests */
mmc_release_host(card->host);
return ret;
......@@ -1624,6 +2051,14 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
blk_queue_flush(md->queue.queue, REQ_FLUSH | REQ_FUA);
}
if (mmc_card_mmc(card) &&
(area_type == MMC_BLK_DATA_AREA_MAIN) &&
(md->flags & MMC_BLK_CMD23) &&
card->ext_csd.packed_event_en) {
if (!mmc_packed_init(&md->queue, card))
md->flags |= MMC_BLK_PACKED_CMD;
}
return md;
err_putdisk:
......@@ -1732,6 +2167,8 @@ static void mmc_blk_remove_req(struct mmc_blk_data *md)
/* Then flush out any already in there */
mmc_cleanup_queue(&md->queue);
if (md->flags & MMC_BLK_PACKED_CMD)
mmc_packed_clean(&md->queue);
mmc_blk_put(md);
}
}
......
......@@ -22,7 +22,8 @@
#define MMC_QUEUE_BOUNCESZ 65536
#define MMC_QUEUE_SUSPENDED (1 << 0)
#define MMC_REQ_SPECIAL_MASK (REQ_DISCARD | REQ_FLUSH)
/*
* Prepare a MMC request. This just filters out odd stuff.
......@@ -58,6 +59,7 @@ static int mmc_queue_thread(void *d)
do {
struct request *req = NULL;
struct mmc_queue_req *tmp;
unsigned int cmd_flags = 0;
spin_lock_irq(q->queue_lock);
set_current_state(TASK_INTERRUPTIBLE);
......@@ -67,12 +69,23 @@ static int mmc_queue_thread(void *d)
if (req || mq->mqrq_prev->req) {
set_current_state(TASK_RUNNING);
cmd_flags = req ? req->cmd_flags : 0;
mq->issue_fn(mq, req);
if (mq->flags & MMC_QUEUE_NEW_REQUEST) {
mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
continue; /* fetch again */
}
/*
* Current request becomes previous request
* and vice versa.
* In case of special requests, current request
* has been finished. Do not assign it to previous
* request.
*/
if (cmd_flags & MMC_REQ_SPECIAL_MASK)
mq->mqrq_cur->req = NULL;
mq->mqrq_prev->brq.mrq.data = NULL;
mq->mqrq_prev->req = NULL;
tmp = mq->mqrq_prev;
......@@ -103,6 +116,8 @@ static void mmc_request_fn(struct request_queue *q)
{
struct mmc_queue *mq = q->queuedata;
struct request *req;
unsigned long flags;
struct mmc_context_info *cntx;
if (!mq) {
while ((req = blk_fetch_request(q)) != NULL) {
......@@ -112,7 +127,20 @@ static void mmc_request_fn(struct request_queue *q)
return;
}
if (!mq->mqrq_cur->req && !mq->mqrq_prev->req)
cntx = &mq->card->host->context_info;
if (!mq->mqrq_cur->req && mq->mqrq_prev->req) {
/*
* New MMC request arrived when MMC thread may be
* blocked on the previous request to be complete
* with no current request fetched
*/
spin_lock_irqsave(&cntx->lock, flags);
if (cntx->is_waiting_last_req) {
cntx->is_new_req = true;
wake_up_interruptible(&cntx->wait);
}
spin_unlock_irqrestore(&cntx->lock, flags);
} else if (!mq->mqrq_cur->req && !mq->mqrq_prev->req)
wake_up_process(mq->thread);
}
......@@ -334,6 +362,49 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
}
EXPORT_SYMBOL(mmc_cleanup_queue);
int mmc_packed_init(struct mmc_queue *mq, struct mmc_card *card)
{
struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
int ret = 0;
mqrq_cur->packed = kzalloc(sizeof(struct mmc_packed), GFP_KERNEL);
if (!mqrq_cur->packed) {
pr_warn("%s: unable to allocate packed cmd for mqrq_cur\n",
mmc_card_name(card));
ret = -ENOMEM;
goto out;
}
mqrq_prev->packed = kzalloc(sizeof(struct mmc_packed), GFP_KERNEL);
if (!mqrq_prev->packed) {
pr_warn("%s: unable to allocate packed cmd for mqrq_prev\n",
mmc_card_name(card));
kfree(mqrq_cur->packed);
mqrq_cur->packed = NULL;
ret = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&mqrq_cur->packed->list);
INIT_LIST_HEAD(&mqrq_prev->packed->list);
out:
return ret;
}
void mmc_packed_clean(struct mmc_queue *mq)
{
struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
kfree(mqrq_cur->packed);
mqrq_cur->packed = NULL;
kfree(mqrq_prev->packed);
mqrq_prev->packed = NULL;
}
/**
* mmc_queue_suspend - suspend a MMC request queue
* @mq: MMC queue to suspend
......@@ -378,6 +449,41 @@ void mmc_queue_resume(struct mmc_queue *mq)
}
}
static unsigned int mmc_queue_packed_map_sg(struct mmc_queue *mq,
struct mmc_packed *packed,
struct scatterlist *sg,
enum mmc_packed_type cmd_type)
{
struct scatterlist *__sg = sg;
unsigned int sg_len = 0;
struct request *req;
if (mmc_packed_wr(cmd_type)) {
unsigned int hdr_sz = mmc_large_sector(mq->card) ? 4096 : 512;
unsigned int max_seg_sz = queue_max_segment_size(mq->queue);
unsigned int len, remain, offset = 0;
u8 *buf = (u8 *)packed->cmd_hdr;
remain = hdr_sz;
do {
len = min(remain, max_seg_sz);
sg_set_buf(__sg, buf + offset, len);
offset += len;
remain -= len;
(__sg++)->page_link &= ~0x02;
sg_len++;
} while (remain);
}
list_for_each_entry(req, &packed->list, queuelist) {
sg_len += blk_rq_map_sg(mq->queue, req, __sg);
__sg = sg + (sg_len - 1);
(__sg++)->page_link &= ~0x02;
}
sg_mark_end(sg + (sg_len - 1));
return sg_len;
}
/*
* Prepare the sg list(s) to be handed of to the host driver
*/
......@@ -386,14 +492,26 @@ unsigned int mmc_queue_map_sg(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
unsigned int sg_len;
size_t buflen;
struct scatterlist *sg;
enum mmc_packed_type cmd_type;
int i;
if (!mqrq->bounce_buf)
return blk_rq_map_sg(mq->queue, mqrq->req, mqrq->sg);
cmd_type = mqrq->cmd_type;
if (!mqrq->bounce_buf) {
if (mmc_packed_cmd(cmd_type))
return mmc_queue_packed_map_sg(mq, mqrq->packed,
mqrq->sg, cmd_type);
else
return blk_rq_map_sg(mq->queue, mqrq->req, mqrq->sg);
}
BUG_ON(!mqrq->bounce_sg);
sg_len = blk_rq_map_sg(mq->queue, mqrq->req, mqrq->bounce_sg);
if (mmc_packed_cmd(cmd_type))
sg_len = mmc_queue_packed_map_sg(mq, mqrq->packed,
mqrq->bounce_sg, cmd_type);
else
sg_len = blk_rq_map_sg(mq->queue, mqrq->req, mqrq->bounce_sg);
mqrq->bounce_sg_len = sg_len;
......
......@@ -12,6 +12,23 @@ struct mmc_blk_request {
struct mmc_data data;
};
enum mmc_packed_type {
MMC_PACKED_NONE = 0,
MMC_PACKED_WRITE,
};
#define mmc_packed_cmd(type) ((type) != MMC_PACKED_NONE)
#define mmc_packed_wr(type) ((type) == MMC_PACKED_WRITE)
struct mmc_packed {
struct list_head list;
u32 cmd_hdr[1024];
unsigned int blocks;
u8 nr_entries;
u8 retries;
s16 idx_failure;
};
struct mmc_queue_req {
struct request *req;
struct mmc_blk_request brq;
......@@ -20,6 +37,8 @@ struct mmc_queue_req {
struct scatterlist *bounce_sg;
unsigned int bounce_sg_len;
struct mmc_async_req mmc_active;
enum mmc_packed_type cmd_type;
struct mmc_packed *packed;
};
struct mmc_queue {
......@@ -27,6 +46,9 @@ struct mmc_queue {
struct task_struct *thread;
struct semaphore thread_sem;
unsigned int flags;
#define MMC_QUEUE_SUSPENDED (1 << 0)
#define MMC_QUEUE_NEW_REQUEST (1 << 1)
int (*issue_fn)(struct mmc_queue *, struct request *);
void *data;
struct request_queue *queue;
......@@ -46,4 +68,7 @@ extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
extern void mmc_queue_bounce_pre(struct mmc_queue_req *);
extern void mmc_queue_bounce_post(struct mmc_queue_req *);
extern int mmc_packed_init(struct mmc_queue *, struct mmc_card *);
extern void mmc_packed_clean(struct mmc_queue *);
#endif
......@@ -321,6 +321,7 @@ int mmc_add_card(struct mmc_card *card)
#ifdef CONFIG_DEBUG_FS
mmc_add_card_debugfs(card);
#endif
mmc_init_context_info(card->host);
ret = device_add(&card->dev);
if (ret)
......
......@@ -319,11 +319,45 @@ void mmc_start_bkops(struct mmc_card *card, bool from_exception)
}
EXPORT_SYMBOL(mmc_start_bkops);
/*
* mmc_wait_data_done() - done callback for data request
* @mrq: done data request
*
* Wakes up mmc context, passed as a callback to host controller driver
*/
static void mmc_wait_data_done(struct mmc_request *mrq)
{
mrq->host->context_info.is_done_rcv = true;
wake_up_interruptible(&mrq->host->context_info.wait);
}
static void mmc_wait_done(struct mmc_request *mrq)
{
complete(&mrq->completion);
}
/*
*__mmc_start_data_req() - starts data request
* @host: MMC host to start the request
* @mrq: data request to start
*
* Sets the done callback to be called when request is completed by the card.
* Starts data mmc request execution
*/
static int __mmc_start_data_req(struct mmc_host *host, struct mmc_request *mrq)
{
mrq->done = mmc_wait_data_done;
mrq->host = host;
if (mmc_card_removed(host->card)) {
mrq->cmd->error = -ENOMEDIUM;
mmc_wait_data_done(mrq);
return -ENOMEDIUM;
}
mmc_start_request(host, mrq);
return 0;
}
static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
{
init_completion(&mrq->completion);
......@@ -337,6 +371,62 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
return 0;
}
/*
* mmc_wait_for_data_req_done() - wait for request completed
* @host: MMC host to prepare the command.
* @mrq: MMC request to wait for
*
* Blocks the MMC context until the host controller acks the end of data request
* execution or a new request notification arrives from the block layer.
* Handles command retries.
*
* Returns enum mmc_blk_status after checking errors.
*/
static int mmc_wait_for_data_req_done(struct mmc_host *host,
struct mmc_request *mrq,
struct mmc_async_req *next_req)
{
struct mmc_command *cmd;
struct mmc_context_info *context_info = &host->context_info;
int err;
unsigned long flags;
while (1) {
wait_event_interruptible(context_info->wait,
(context_info->is_done_rcv ||
context_info->is_new_req));
spin_lock_irqsave(&context_info->lock, flags);
context_info->is_waiting_last_req = false;
spin_unlock_irqrestore(&context_info->lock, flags);
if (context_info->is_done_rcv) {
context_info->is_done_rcv = false;
context_info->is_new_req = false;
cmd = mrq->cmd;
if (!cmd->error || !cmd->retries ||
mmc_card_removed(host->card)) {
err = host->areq->err_check(host->card,
host->areq);
break; /* return err */
} else {
pr_info("%s: req failed (CMD%u): %d, retrying...\n",
mmc_hostname(host),
cmd->opcode, cmd->error);
cmd->retries--;
cmd->error = 0;
host->ops->request(host, mrq);
continue; /* wait for done/new event again */
}
} else if (context_info->is_new_req) {
context_info->is_new_req = false;
if (!next_req) {
err = MMC_BLK_NEW_REQUEST;
break; /* return err */
}
}
}
return err;
}
static void mmc_wait_for_req_done(struct mmc_host *host,
struct mmc_request *mrq)
{
......@@ -426,8 +516,16 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
mmc_pre_req(host, areq->mrq, !host->areq);
if (host->areq) {
mmc_wait_for_req_done(host, host->areq->mrq);
err = host->areq->err_check(host->card, host->areq);
err = mmc_wait_for_data_req_done(host, host->areq->mrq, areq);
if (err == MMC_BLK_NEW_REQUEST) {
if (error)
*error = err;
/*
* The previous request was not completed,
* nothing to return
*/
return NULL;
}
/*
* Check BKOPS urgency for each R1 response
*/
......@@ -439,14 +537,14 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
}
if (!err && areq)
start_err = __mmc_start_req(host, areq->mrq);
start_err = __mmc_start_data_req(host, areq->mrq);
if (host->areq)
mmc_post_req(host, host->areq->mrq, 0);
/* Cancel a prepared request if it was not started. */
if ((err || start_err) && areq)
mmc_post_req(host, areq->mrq, -EINVAL);
mmc_post_req(host, areq->mrq, -EINVAL);
if (err)
host->areq = NULL;
......@@ -1137,7 +1235,7 @@ int mmc_regulator_set_ocr(struct mmc_host *mmc,
*/
voltage = regulator_get_voltage(supply);
if (regulator_count_voltages(supply) == 1)
if (!regulator_can_change_voltage(supply))
min_uV = max_uV = voltage;
if (voltage < 0)
......@@ -1219,10 +1317,30 @@ u32 mmc_select_voltage(struct mmc_host *host, u32 ocr)
return ocr;
}
int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage, bool cmd11)
int __mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage)
{
int err = 0;
int old_signal_voltage = host->ios.signal_voltage;
host->ios.signal_voltage = signal_voltage;
if (host->ops->start_signal_voltage_switch) {
mmc_host_clk_hold(host);
err = host->ops->start_signal_voltage_switch(host, &host->ios);
mmc_host_clk_release(host);
}
if (err)
host->ios.signal_voltage = old_signal_voltage;
return err;
}
int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage)
{
struct mmc_command cmd = {0};
int err = 0;
u32 clock;
BUG_ON(!host);
......@@ -1230,27 +1348,81 @@ int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage, bool cmd11
* Send CMD11 only if the request is to switch the card to
* 1.8V signalling.
*/
if ((signal_voltage != MMC_SIGNAL_VOLTAGE_330) && cmd11) {
cmd.opcode = SD_SWITCH_VOLTAGE;
cmd.arg = 0;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
if (signal_voltage == MMC_SIGNAL_VOLTAGE_330)
return __mmc_set_signal_voltage(host, signal_voltage);
err = mmc_wait_for_cmd(host, &cmd, 0);
if (err)
return err;
/*
* If we cannot switch voltages, return failure so the caller
* can continue without UHS mode
*/
if (!host->ops->start_signal_voltage_switch)
return -EPERM;
if (!host->ops->card_busy)
pr_warning("%s: cannot verify signal voltage switch\n",
mmc_hostname(host));
cmd.opcode = SD_SWITCH_VOLTAGE;
cmd.arg = 0;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
if (!mmc_host_is_spi(host) && (cmd.resp[0] & R1_ERROR))
return -EIO;
err = mmc_wait_for_cmd(host, &cmd, 0);
if (err)
return err;
if (!mmc_host_is_spi(host) && (cmd.resp[0] & R1_ERROR))
return -EIO;
mmc_host_clk_hold(host);
/*
* The card should drive cmd and dat[0:3] low immediately
* after the response of cmd11, but wait 1 ms to be sure
*/
mmc_delay(1);
if (host->ops->card_busy && !host->ops->card_busy(host)) {
err = -EAGAIN;
goto power_cycle;
}
/*
* During a signal voltage level switch, the clock must be gated
* for 5 ms according to the SD spec
*/
clock = host->ios.clock;
host->ios.clock = 0;
mmc_set_ios(host);
host->ios.signal_voltage = signal_voltage;
if (__mmc_set_signal_voltage(host, signal_voltage)) {
/*
* Voltages may not have been switched, but we've already
* sent CMD11, so a power cycle is required anyway
*/
err = -EAGAIN;
goto power_cycle;
}
if (host->ops->start_signal_voltage_switch) {
mmc_host_clk_hold(host);
err = host->ops->start_signal_voltage_switch(host, &host->ios);
mmc_host_clk_release(host);
/* Keep clock gated for at least 5 ms */
mmc_delay(5);
host->ios.clock = clock;
mmc_set_ios(host);
/* Wait for at least 1 ms according to spec */
mmc_delay(1);
/*
* Failure to switch is indicated by the card holding
* dat[0:3] low
*/
if (host->ops->card_busy && host->ops->card_busy(host))
err = -EAGAIN;
power_cycle:
if (err) {
pr_debug("%s: Signal voltage switch failed, "
"power cycling card\n", mmc_hostname(host));
mmc_power_cycle(host);
}
mmc_host_clk_release(host);
return err;
}
......@@ -1314,7 +1486,7 @@ static void mmc_power_up(struct mmc_host *host)
mmc_set_ios(host);
/* Set signal voltage to 3.3V */
mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, false);
__mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330);
/*
* This delay should be sufficient to allow the power supply
......@@ -1372,6 +1544,14 @@ void mmc_power_off(struct mmc_host *host)
mmc_host_clk_release(host);
}
void mmc_power_cycle(struct mmc_host *host)
{
mmc_power_off(host);
/* Wait at least 1 ms according to SD spec */
mmc_delay(1);
mmc_power_up(host);
}
/*
* Cleanup when the last reference to the bus operator is dropped.
*/
......@@ -2388,6 +2568,7 @@ EXPORT_SYMBOL(mmc_flush_cache);
* Turn the cache ON/OFF.
* Turning the cache OFF shall trigger flushing of the data
* to the non-volatile storage.
* This function should be called with host claimed
*/
int mmc_cache_ctrl(struct mmc_host *host, u8 enable)
{
......@@ -2399,7 +2580,6 @@ int mmc_cache_ctrl(struct mmc_host *host, u8 enable)
mmc_card_is_removable(host))
return err;
mmc_claim_host(host);
if (card && mmc_card_mmc(card) &&
(card->ext_csd.cache_size > 0)) {
enable = !!enable;
......@@ -2417,7 +2597,6 @@ int mmc_cache_ctrl(struct mmc_host *host, u8 enable)
card->ext_csd.cache_ctrl = enable;
}
}
mmc_release_host(host);
return err;
}
......@@ -2436,10 +2615,6 @@ int mmc_suspend_host(struct mmc_host *host)
cancel_delayed_work(&host->detect);
mmc_flush_scheduled_work();
err = mmc_cache_ctrl(host, 0);
if (err)
goto out;
mmc_bus_get(host);
if (host->bus_ops && !host->bus_dead) {
if (host->bus_ops->suspend) {
......@@ -2581,6 +2756,23 @@ int mmc_pm_notify(struct notifier_block *notify_block,
}
#endif
/**
* mmc_init_context_info() - init synchronization context
* @host: mmc host
*
* Init struct context_info needed to implement asynchronous
* request mechanism, used by mmc core, host driver and mmc requests
* supplier.
*/
void mmc_init_context_info(struct mmc_host *host)
{
spin_lock_init(&host->context_info.lock);
host->context_info.is_new_req = false;
host->context_info.is_done_rcv = false;
host->context_info.is_waiting_last_req = false;
init_waitqueue_head(&host->context_info.wait);
}
static int __init mmc_init(void)
{
int ret;
......
......@@ -40,11 +40,12 @@ void mmc_set_ungated(struct mmc_host *host);
void mmc_set_bus_mode(struct mmc_host *host, unsigned int mode);
void mmc_set_bus_width(struct mmc_host *host, unsigned int width);
u32 mmc_select_voltage(struct mmc_host *host, u32 ocr);
int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage,
bool cmd11);
int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage);
int __mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage);
void mmc_set_timing(struct mmc_host *host, unsigned int timing);
void mmc_set_driver_type(struct mmc_host *host, unsigned int drv_type);
void mmc_power_off(struct mmc_host *host);
void mmc_power_cycle(struct mmc_host *host);
static inline void mmc_delay(unsigned int ms)
{
......@@ -76,5 +77,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
void mmc_add_card_debugfs(struct mmc_card *card);
void mmc_remove_card_debugfs(struct mmc_card *card);
void mmc_init_context_info(struct mmc_host *host);
#endif
......@@ -15,6 +15,8 @@
#include <linux/device.h>
#include <linux/err.h>
#include <linux/idr.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pagemap.h>
#include <linux/export.h>
#include <linux/leds.h>
......@@ -23,6 +25,7 @@
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/mmc/slot-gpio.h>
#include "core.h"
#include "host.h"
......@@ -294,6 +297,126 @@ static inline void mmc_host_clk_sysfs_init(struct mmc_host *host)
#endif
/**
* mmc_of_parse() - parse host's device-tree node
* @host: host whose node should be parsed.
*
* To keep the rest of the MMC subsystem unaware of whether DT has been
* used to instantiate and configure this host instance or not, we
* parse the properties and set respective generic mmc-host flags and
* parameters.
*/
void mmc_of_parse(struct mmc_host *host)
{
struct device_node *np;
u32 bus_width;
bool explicit_inv_wp, gpio_inv_wp = false;
enum of_gpio_flags flags;
int len, ret, gpio;
if (!host->parent || !host->parent->of_node)
return;
np = host->parent->of_node;
/* "bus-width" is translated to MMC_CAP_*_BIT_DATA flags */
if (of_property_read_u32(np, "bus-width", &bus_width) < 0) {
dev_dbg(host->parent,
"\"bus-width\" property is missing, assuming 1 bit.\n");
bus_width = 1;
}
switch (bus_width) {
case 8:
host->caps |= MMC_CAP_8_BIT_DATA;
/* Hosts capable of 8-bit transfers can also do 4 bits */
case 4:
host->caps |= MMC_CAP_4_BIT_DATA;
break;
case 1:
break;
default:
dev_err(host->parent,
"Invalid \"bus-width\" value %u!\n", bus_width);
}
/* f_max is obtained from the optional "max-frequency" property */
of_property_read_u32(np, "max-frequency", &host->f_max);
/*
* Configure CD and WP pins. They are both by default active low to
* match the SDHCI spec. If GPIOs are provided for CD and / or WP, the
* mmc-gpio helpers are used to attach, configure and use them. If
* polarity inversion is specified in DT, one of MMC_CAP2_CD_ACTIVE_HIGH
* and MMC_CAP2_RO_ACTIVE_HIGH capability-2 flags is set. If the
* "broken-cd" property is provided, the MMC_CAP_NEEDS_POLL capability
* is set. If the "non-removable" property is found, the
* MMC_CAP_NONREMOVABLE capability is set and no card-detection
* configuration is performed.
*/
/* Parse Card Detection */
if (of_find_property(np, "non-removable", &len)) {
host->caps |= MMC_CAP_NONREMOVABLE;
} else {
bool explicit_inv_cd, gpio_inv_cd = false;
explicit_inv_cd = of_property_read_bool(np, "cd-inverted");
if (of_find_property(np, "broken-cd", &len))
host->caps |= MMC_CAP_NEEDS_POLL;
gpio = of_get_named_gpio_flags(np, "cd-gpios", 0, &flags);
if (gpio_is_valid(gpio)) {
if (!(flags & OF_GPIO_ACTIVE_LOW))
gpio_inv_cd = true;
ret = mmc_gpio_request_cd(host, gpio);
if (ret < 0)
dev_err(host->parent,
"Failed to request CD GPIO #%d: %d!\n",
gpio, ret);
else
dev_info(host->parent, "Got CD GPIO #%d.\n",
gpio);
}
if (explicit_inv_cd ^ gpio_inv_cd)
host->caps2 |= MMC_CAP2_CD_ACTIVE_HIGH;
}
/* Parse Write Protection */
explicit_inv_wp = of_property_read_bool(np, "wp-inverted");
gpio = of_get_named_gpio_flags(np, "wp-gpios", 0, &flags);
if (gpio_is_valid(gpio)) {
if (!(flags & OF_GPIO_ACTIVE_LOW))
gpio_inv_wp = true;
ret = mmc_gpio_request_ro(host, gpio);
if (ret < 0)
dev_err(host->parent,
"Failed to request WP GPIO: %d!\n", ret);
}
if (explicit_inv_wp ^ gpio_inv_wp)
host->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH;
if (of_find_property(np, "cap-sd-highspeed", &len))
host->caps |= MMC_CAP_SD_HIGHSPEED;
if (of_find_property(np, "cap-mmc-highspeed", &len))
host->caps |= MMC_CAP_MMC_HIGHSPEED;
if (of_find_property(np, "cap-power-off-card", &len))
host->caps |= MMC_CAP_POWER_OFF_CARD;
if (of_find_property(np, "cap-sdio-irq", &len))
host->caps |= MMC_CAP_SDIO_IRQ;
if (of_find_property(np, "keep-power-in-suspend", &len))
host->pm_caps |= MMC_PM_KEEP_POWER;
if (of_find_property(np, "enable-sdio-wakeup", &len))
host->pm_caps |= MMC_PM_WAKE_SDIO_IRQ;
}
EXPORT_SYMBOL(mmc_of_parse);
/**
* mmc_alloc_host - initialise the per-host structure.
* @extra: sizeof private data structure
......
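A hedged sketch of how a platform host driver is expected to pick up the new mmc_of_parse() helper: call it after mmc_alloc_host() (which attaches the probing device as host->parent, so the host's DT node can be found) and before mmc_add_host(). The foo_* name is illustrative only, and controller-specific setup is elided:

	static int foo_mmc_probe(struct platform_device *pdev)
	{
		struct mmc_host *mmc;
		int ret;

		/* No extra private data in this sketch */
		mmc = mmc_alloc_host(0, &pdev->dev);
		if (!mmc)
			return -ENOMEM;

		/* Centrally parse bus-width, cd-gpios/broken-cd/non-removable,
		 * wp-gpios, *-inverted, max-frequency and the cap-* and
		 * keep-power-in-suspend / enable-sdio-wakeup properties from
		 * the host's DT node. */
		mmc_of_parse(mmc);

		/* ... controller-specific setup would go here ... */

		ret = mmc_add_host(mmc);
		if (ret)
			mmc_free_host(mmc);
		return ret;
	}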
......@@ -496,7 +496,7 @@ static int mmc_read_ext_csd(struct mmc_card *card, u8 *ext_csd)
* RPMB regions are defined in multiples of 128K.
*/
card->ext_csd.raw_rpmb_size_mult = ext_csd[EXT_CSD_RPMB_MULT];
if (ext_csd[EXT_CSD_RPMB_MULT]) {
if (ext_csd[EXT_CSD_RPMB_MULT] && mmc_host_cmd23(card->host)) {
mmc_part_add(card, ext_csd[EXT_CSD_RPMB_MULT] << 17,
EXT_CSD_PART_CONFIG_ACC_RPMB,
"rpmb", 0, false,
......@@ -538,6 +538,11 @@ static int mmc_read_ext_csd(struct mmc_card *card, u8 *ext_csd)
} else {
card->ext_csd.data_tag_unit_size = 0;
}
card->ext_csd.max_packed_writes =
ext_csd[EXT_CSD_MAX_PACKED_WRITES];
card->ext_csd.max_packed_reads =
ext_csd[EXT_CSD_MAX_PACKED_READS];
} else {
card->ext_csd.data_sector_size = 512;
}
......@@ -769,11 +774,11 @@ static int mmc_select_hs200(struct mmc_card *card)
if (card->ext_csd.card_type & EXT_CSD_CARD_TYPE_SDR_1_2V &&
host->caps2 & MMC_CAP2_HS200_1_2V_SDR)
err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120, 0);
err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
if (err && card->ext_csd.card_type & EXT_CSD_CARD_TYPE_SDR_1_8V &&
host->caps2 & MMC_CAP2_HS200_1_8V_SDR)
err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180, 0);
err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
/* If fails try again during next card power cycle */
if (err)
......@@ -1221,8 +1226,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
* WARNING: eMMC rules are NOT the same as SD DDR
*/
if (ddr == MMC_1_2V_DDR_MODE) {
err = mmc_set_signal_voltage(host,
MMC_SIGNAL_VOLTAGE_120, 0);
err = __mmc_set_signal_voltage(host,
MMC_SIGNAL_VOLTAGE_120);
if (err)
goto err;
}
......@@ -1275,6 +1280,29 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
}
}
/*
* The mandatory minimum values are defined for packed command.
* read: 5, write: 3
*/
if (card->ext_csd.max_packed_writes >= 3 &&
card->ext_csd.max_packed_reads >= 5 &&
host->caps2 & MMC_CAP2_PACKED_CMD) {
err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_EXP_EVENTS_CTRL,
EXT_CSD_PACKED_EVENT_EN,
card->ext_csd.generic_cmd6_time);
if (err && err != -EBADMSG)
goto free_card;
if (err) {
pr_warn("%s: Enabling packed event failed\n",
mmc_hostname(card->host));
card->ext_csd.packed_event_en = 0;
err = 0;
} else {
card->ext_csd.packed_event_en = 1;
}
}
if (!oldcard)
host->card = card;
......@@ -1379,6 +1407,11 @@ static int mmc_suspend(struct mmc_host *host)
BUG_ON(!host->card);
mmc_claim_host(host);
err = mmc_cache_ctrl(host, 0);
if (err)
goto out;
if (mmc_can_poweroff_notify(host->card))
err = mmc_poweroff_notify(host->card, EXT_CSD_POWER_OFF_SHORT);
else if (mmc_card_can_sleep(host))
......@@ -1386,8 +1419,9 @@ static int mmc_suspend(struct mmc_host *host)
else if (!mmc_host_is_spi(host))
err = mmc_deselect_cards(host);
host->card->state &= ~(MMC_STATE_HIGHSPEED | MMC_STATE_HIGHSPEED_200);
mmc_release_host(host);
out:
mmc_release_host(host);
return err;
}
......
......@@ -363,6 +363,7 @@ int mmc_send_ext_csd(struct mmc_card *card, u8 *ext_csd)
return mmc_send_cxd_data(card, card->host, MMC_SEND_EXT_CSD,
ext_csd, 512);
}
EXPORT_SYMBOL_GPL(mmc_send_ext_csd);
int mmc_spi_read_ocr(struct mmc_host *host, int highcap, u32 *ocrp)
{
......
......@@ -444,8 +444,7 @@ static void sd_update_bus_speed_mode(struct mmc_card *card)
* If the host doesn't support any of the UHS-I modes, fallback on
* default speed.
*/
if (!(card->host->caps & (MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | MMC_CAP_UHS_DDR50))) {
if (!mmc_host_uhs(card->host)) {
card->sd_bus_speed = 0;
return;
}
......@@ -713,6 +712,14 @@ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr)
{
int err;
u32 max_current;
int retries = 10;
try_again:
if (!retries) {
ocr &= ~SD_OCR_S18R;
pr_warning("%s: Skipping voltage switch\n",
mmc_hostname(host));
}
/*
* Since we're changing the OCR value, we seem to
......@@ -734,10 +741,10 @@ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr)
/*
* If the host supports one of UHS-I modes, request the card
* to switch to 1.8V signaling level.
* to switch to 1.8V signaling level. If the card has failed
* repeatedly to switch however, skip this.
*/
if (host->caps & (MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | MMC_CAP_UHS_DDR50))
if (retries && mmc_host_uhs(host))
ocr |= SD_OCR_S18R;
/*
......@@ -748,7 +755,6 @@ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr)
if (max_current > 150)
ocr |= SD_OCR_XPC;
try_again:
err = mmc_send_app_op_cond(host, ocr, rocr);
if (err)
return err;
......@@ -759,9 +765,12 @@ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr)
*/
if (!mmc_host_is_spi(host) && rocr &&
((*rocr & 0x41000000) == 0x41000000)) {
err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180, true);
if (err) {
ocr &= ~SD_OCR_S18R;
err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
if (err == -EAGAIN) {
retries--;
goto try_again;
} else if (err) {
retries = 0;
goto try_again;
}
}
......@@ -960,16 +969,6 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
/* Card is an ultra-high-speed card */
mmc_card_set_uhs(card);
/*
* Since initialization is now complete, enable preset
* value registers for UHS-I cards.
*/
if (host->ops->enable_preset_value) {
mmc_host_clk_hold(card->host);
host->ops->enable_preset_value(host, true);
mmc_host_clk_release(card->host);
}
} else {
/*
* Attempt to change to high-speed (if supported)
......@@ -1148,13 +1147,6 @@ int mmc_attach_sd(struct mmc_host *host)
BUG_ON(!host);
WARN_ON(!host->claimed);
/* Disable preset value enable if already set since last time */
if (host->ops->enable_preset_value) {
mmc_host_clk_hold(host);
host->ops->enable_preset_value(host, false);
mmc_host_clk_release(host);
}
err = mmc_send_app_op_cond(host, 0, &ocr);
if (err)
return err;
......
......@@ -157,10 +157,7 @@ static int sdio_read_cccr(struct mmc_card *card, u32 ocr)
if (ret)
goto out;
if (card->host->caps &
(MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_DDR50)) {
if (mmc_host_uhs(card->host)) {
if (data & SDIO_UHS_DDR50)
card->sw_caps.sd3_bus_mode
|= SD_MODE_UHS_DDR50;
......@@ -478,8 +475,7 @@ static int sdio_set_bus_speed_mode(struct mmc_card *card)
* If the host doesn't support any of the UHS-I modes, fallback on
* default speed.
*/
if (!(card->host->caps & (MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | MMC_CAP_UHS_DDR50)))
if (!mmc_host_uhs(card->host))
return 0;
bus_speed = SDIO_SPEED_SDR12;
......@@ -489,23 +485,27 @@ static int sdio_set_bus_speed_mode(struct mmc_card *card)
bus_speed = SDIO_SPEED_SDR104;
timing = MMC_TIMING_UHS_SDR104;
card->sw_caps.uhs_max_dtr = UHS_SDR104_MAX_DTR;
card->sd_bus_speed = UHS_SDR104_BUS_SPEED;
} else if ((card->host->caps & MMC_CAP_UHS_DDR50) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_DDR50)) {
bus_speed = SDIO_SPEED_DDR50;
timing = MMC_TIMING_UHS_DDR50;
card->sw_caps.uhs_max_dtr = UHS_DDR50_MAX_DTR;
card->sd_bus_speed = UHS_DDR50_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50)) && (card->sw_caps.sd3_bus_mode &
SD_MODE_UHS_SDR50)) {
bus_speed = SDIO_SPEED_SDR50;
timing = MMC_TIMING_UHS_SDR50;
card->sw_caps.uhs_max_dtr = UHS_SDR50_MAX_DTR;
card->sd_bus_speed = UHS_SDR50_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR25)) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR25)) {
bus_speed = SDIO_SPEED_SDR25;
timing = MMC_TIMING_UHS_SDR25;
card->sw_caps.uhs_max_dtr = UHS_SDR25_MAX_DTR;
card->sd_bus_speed = UHS_SDR25_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR25 |
MMC_CAP_UHS_SDR12)) && (card->sw_caps.sd3_bus_mode &
......@@ -513,6 +513,7 @@ static int sdio_set_bus_speed_mode(struct mmc_card *card)
bus_speed = SDIO_SPEED_SDR12;
timing = MMC_TIMING_UHS_SDR12;
card->sw_caps.uhs_max_dtr = UHS_SDR12_MAX_DTR;
card->sd_bus_speed = UHS_SDR12_BUS_SPEED;
}
err = mmc_io_rw_direct(card, 0, 0, SDIO_CCCR_SPEED, 0, &speed);
......@@ -583,10 +584,19 @@ static int mmc_sdio_init_card(struct mmc_host *host, u32 ocr,
{
struct mmc_card *card;
int err;
int retries = 10;
BUG_ON(!host);
WARN_ON(!host->claimed);
try_again:
if (!retries) {
pr_warning("%s: Skipping voltage switch\n",
mmc_hostname(host));
ocr &= ~R4_18V_PRESENT;
host->ocr &= ~R4_18V_PRESENT;
}
/*
* Inform the card of the voltage
*/
......@@ -645,14 +655,16 @@ static int mmc_sdio_init_card(struct mmc_host *host, u32 ocr,
* systems that claim 1.8v signalling in fact do not support
* it.
*/
if ((ocr & R4_18V_PRESENT) &&
(host->caps &
(MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_DDR50))) {
err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180,
true);
if (err) {
if (!powered_resume && (ocr & R4_18V_PRESENT) && mmc_host_uhs(host)) {
err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
if (err == -EAGAIN) {
sdio_reset(host);
mmc_go_idle(host);
mmc_send_if_cond(host, host->ocr_avail);
mmc_remove_card(card);
retries--;
goto try_again;
} else if (err) {
ocr &= ~R4_18V_PRESENT;
host->ocr &= ~R4_18V_PRESENT;
}
......@@ -937,10 +949,12 @@ static int mmc_sdio_resume(struct mmc_host *host)
mmc_claim_host(host);
/* No need to reinitialize powered-resumed nonremovable cards */
if (mmc_card_is_removable(host) || !mmc_card_keep_power(host))
if (mmc_card_is_removable(host) || !mmc_card_keep_power(host)) {
sdio_reset(host);
mmc_go_idle(host);
err = mmc_sdio_init_card(host, host->ocr, host->card,
mmc_card_keep_power(host));
else if (mmc_card_keep_power(host) && mmc_card_wake_sdio_irq(host)) {
} else if (mmc_card_keep_power(host) && mmc_card_wake_sdio_irq(host)) {
/* We may have switched to 1-bit mode during suspend */
err = sdio_enable_4bit_bus(host->card);
if (err > 0) {
......@@ -1020,6 +1034,10 @@ static int mmc_sdio_power_restore(struct mmc_host *host)
goto out;
}
if (mmc_host_uhs(host))
/* to query card if 1.8V signalling is supported */
host->ocr |= R4_18V_PRESENT;
ret = mmc_sdio_init_card(host, host->ocr, host->card,
mmc_card_keep_power(host));
if (!ret && host->sdio_irqs)
......@@ -1085,6 +1103,10 @@ int mmc_attach_sdio(struct mmc_host *host)
/*
* Detect and init the card.
*/
if (mmc_host_uhs(host))
/* to query card if 1.8V signalling is supported */
host->ocr |= R4_18V_PRESENT;
err = mmc_sdio_init_card(host, host->ocr, NULL, 0);
if (err) {
if (err == -EAGAIN) {
......
......@@ -92,6 +92,20 @@ int mmc_gpio_get_cd(struct mmc_host *host)
}
EXPORT_SYMBOL(mmc_gpio_get_cd);
/**
* mmc_gpio_request_ro - request a gpio for write-protection
* @host: mmc host
* @gpio: gpio number requested
*
* As devm_* managed functions are used in mmc_gpio_request_ro(), client
* drivers do not need to call mmc_gpio_free_ro() explicitly if the gpio is
* only requested at probe time and freed at unbind time. However, if client
* drivers do something special like runtime switching of write-protection,
* they are responsible for calling mmc_gpio_request_ro() and
* mmc_gpio_free_ro() as a pair on their own.
*
* Returns zero on success, else an error.
*/
int mmc_gpio_request_ro(struct mmc_host *host, unsigned int gpio)
{
struct mmc_gpio *ctx;
......@@ -106,7 +120,8 @@ int mmc_gpio_request_ro(struct mmc_host *host, unsigned int gpio)
ctx = host->slot.handler_priv;
ret = gpio_request_one(gpio, GPIOF_DIR_IN, ctx->ro_label);
ret = devm_gpio_request_one(&host->class_dev, gpio, GPIOF_DIR_IN,
ctx->ro_label);
if (ret < 0)
return ret;
......@@ -116,6 +131,20 @@ int mmc_gpio_request_ro(struct mmc_host *host, unsigned int gpio)
}
EXPORT_SYMBOL(mmc_gpio_request_ro);
/**
* mmc_gpio_request_cd - request a gpio for card-detection
* @host: mmc host
* @gpio: gpio number requested
*
* Since mmc_gpio_request_cd() uses devm_* managed functions, client drivers
* do not need to call mmc_gpio_free_cd() explicitly when the request and the
* release only happen once, at probe and unbind time. However, client drivers
* that do something special, such as switching card-detection at runtime,
* are responsible for calling mmc_gpio_request_cd() and mmc_gpio_free_cd()
* as a pair on their own.
*
* Returns zero on success, else an error.
*/
int mmc_gpio_request_cd(struct mmc_host *host, unsigned int gpio)
{
struct mmc_gpio *ctx;
......@@ -128,7 +157,8 @@ int mmc_gpio_request_cd(struct mmc_host *host, unsigned int gpio)
ctx = host->slot.handler_priv;
ret = gpio_request_one(gpio, GPIOF_DIR_IN, ctx->cd_label);
ret = devm_gpio_request_one(&host->class_dev, gpio, GPIOF_DIR_IN,
ctx->cd_label);
if (ret < 0)
/*
* don't bother freeing memory. It might still get used by other
......@@ -146,7 +176,8 @@ int mmc_gpio_request_cd(struct mmc_host *host, unsigned int gpio)
irq = -EINVAL;
if (irq >= 0) {
ret = request_threaded_irq(irq, NULL, mmc_gpio_cd_irqt,
ret = devm_request_threaded_irq(&host->class_dev, irq,
NULL, mmc_gpio_cd_irqt,
IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
ctx->cd_label, host);
if (ret < 0)
......@@ -164,6 +195,13 @@ int mmc_gpio_request_cd(struct mmc_host *host, unsigned int gpio)
}
EXPORT_SYMBOL(mmc_gpio_request_cd);
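To illustrate the devm-managed usage described in the kerneldoc above, here is a minimal, hypothetical probe fragment. It is a sketch only: example_host_probe(), cd_gpio and wp_gpio are made-up names, the gpio numbers are assumed to come from platform data or a device-tree lookup elsewhere, and error handling is reduced to the minimum.

#include <linux/mmc/host.h>
#include <linux/mmc/slot-gpio.h>
#include <linux/platform_device.h>

/* Hypothetical probe fragment; cd_gpio/wp_gpio would normally be looked up
 * from platform data or the device tree elsewhere in the driver. */
static int example_host_probe(struct platform_device *pdev,
			      int cd_gpio, int wp_gpio)
{
	struct mmc_host *mmc;
	int ret;

	mmc = mmc_alloc_host(0, &pdev->dev);
	if (!mmc)
		return -ENOMEM;

	/* devm-managed: no explicit mmc_gpio_free_cd() needed at unbind */
	ret = mmc_gpio_request_cd(mmc, cd_gpio);
	if (ret)
		mmc->caps |= MMC_CAP_NEEDS_POLL;	/* fall back to polling */

	/* likewise for write-protect; failure just means "no WP line" */
	ret = mmc_gpio_request_ro(mmc, wp_gpio);
	if (ret)
		dev_warn(&pdev->dev, "no write-protect gpio\n");

	return mmc_add_host(mmc);
}

A driver that wants the core to read the write-protect line back simply points .get_ro in its mmc_host_ops at mmc_gpio_get_ro, which is what the mvsdio and sdhci-esdhc-imx conversions further down do.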
/**
* mmc_gpio_free_ro - free the write-protection gpio
* @host: mmc host
*
* It is provided only for cases where client drivers need to manually free
* up the write-protection gpio requested by mmc_gpio_request_ro().
*/
void mmc_gpio_free_ro(struct mmc_host *host)
{
struct mmc_gpio *ctx = host->slot.handler_priv;
......@@ -175,10 +213,17 @@ void mmc_gpio_free_ro(struct mmc_host *host)
gpio = ctx->ro_gpio;
ctx->ro_gpio = -EINVAL;
gpio_free(gpio);
devm_gpio_free(&host->class_dev, gpio);
}
EXPORT_SYMBOL(mmc_gpio_free_ro);
/**
* mmc_gpio_free_cd - free the card-detection gpio
* @host: mmc host
*
* It is provided only for cases where client drivers need to manually free
* up the card-detection gpio requested by mmc_gpio_request_cd().
*/
void mmc_gpio_free_cd(struct mmc_host *host)
{
struct mmc_gpio *ctx = host->slot.handler_priv;
......@@ -188,13 +233,13 @@ void mmc_gpio_free_cd(struct mmc_host *host)
return;
if (host->slot.cd_irq >= 0) {
free_irq(host->slot.cd_irq, host);
devm_free_irq(&host->class_dev, host->slot.cd_irq, host);
host->slot.cd_irq = -EINVAL;
}
gpio = ctx->cd_gpio;
ctx->cd_gpio = -EINVAL;
gpio_free(gpio);
devm_gpio_free(&host->class_dev, gpio);
}
EXPORT_SYMBOL(mmc_gpio_free_cd);
......@@ -238,6 +238,17 @@ config MMC_SDHCI_S3C_DMA
YMMV.
config MMC_SDHCI_BCM2835
tristate "SDHCI platform support for the BCM2835 SD/MMC Controller"
depends on ARCH_BCM2835
depends on MMC_SDHCI_PLTFM
select MMC_SDHCI_IO_ACCESSORS
help
This selects the BCM2835 SD/MMC controller. If you have a BCM2835
platform with SD or MMC devices, say Y or M here.
If unsure, say N.
config MMC_OMAP
tristate "TI OMAP Multimedia Card Interface support"
depends on ARCH_OMAP
......@@ -361,6 +372,13 @@ config MMC_DAVINCI
If you have an DAVINCI board with a Multimedia Card slot,
say Y or M here. If unsure, say N.
config MMC_GOLDFISH
tristate "goldfish qemu Multimedia Card Interface support"
depends on GOLDFISH
help
This selects the Goldfish Multimedia card Interface emulation
found on the Goldfish Android virtual device emulation.
config MMC_SPI
tristate "MMC/SD/SDIO over SPI"
depends on SPI_MASTER && !HIGHMEM && HAS_DMA
......
......@@ -23,6 +23,7 @@ obj-$(CONFIG_MMC_TIFM_SD) += tifm_sd.o
obj-$(CONFIG_MMC_MSM) += msm_sdcc.o
obj-$(CONFIG_MMC_MVSDIO) += mvsdio.o
obj-$(CONFIG_MMC_DAVINCI) += davinci_mmc.o
obj-$(CONFIG_MMC_GOLDFISH) += android-goldfish.o
obj-$(CONFIG_MMC_SPI) += mmc_spi.o
ifeq ($(CONFIG_OF),y)
obj-$(CONFIG_MMC_SPI) += of_mmc_spi.o
......@@ -58,6 +59,7 @@ obj-$(CONFIG_MMC_SDHCI_DOVE) += sdhci-dove.o
obj-$(CONFIG_MMC_SDHCI_TEGRA) += sdhci-tegra.o
obj-$(CONFIG_MMC_SDHCI_OF_ESDHC) += sdhci-of-esdhc.o
obj-$(CONFIG_MMC_SDHCI_OF_HLWD) += sdhci-of-hlwd.o
obj-$(CONFIG_MMC_SDHCI_BCM2835) += sdhci-bcm2835.o
ifeq ($(CONFIG_CB710_DEBUG),y)
CFLAGS-cb710-mmc += -DDEBUG
......
/*
* Copyright 2007, Google Inc.
* Copyright 2012, Intel Inc.
*
* based on omap.c driver, which was
* Copyright (C) 2004 Nokia Corporation
* Written by Tuukka Tikkanen and Juha Yrjölä <juha.yrjola@nokia.com>
* Misc hacks here and there by Tony Lindgren <tony@atomide.com>
* Other hacks (DMA, SD, etc) by David Brownell
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/major.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/hdreg.h>
#include <linux/kdev_t.h>
#include <linux/blkdev.h>
#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <linux/mmc/mmc.h>
#include <linux/mmc/sdio.h>
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/dma-mapping.h>
#include <linux/delay.h>
#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/clk.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/scatterlist.h>
#include <asm/types.h>
#include <asm/io.h>
#include <asm/uaccess.h>
#define DRIVER_NAME "goldfish_mmc"
#define BUFFER_SIZE 16384
#define GOLDFISH_MMC_READ(host, addr) (readl(host->reg_base + addr))
#define GOLDFISH_MMC_WRITE(host, addr, x) (writel(x, host->reg_base + addr))
enum {
/* status register */
MMC_INT_STATUS = 0x00,
/* set this to enable IRQ */
MMC_INT_ENABLE = 0x04,
/* set this to specify buffer address */
MMC_SET_BUFFER = 0x08,
/* MMC command number */
MMC_CMD = 0x0C,
/* MMC argument */
MMC_ARG = 0x10,
/* MMC response (or R2 bits 0 - 31) */
MMC_RESP_0 = 0x14,
/* MMC R2 response bits 32 - 63 */
MMC_RESP_1 = 0x18,
/* MMC R2 response bits 64 - 95 */
MMC_RESP_2 = 0x1C,
/* MMC R2 response bits 96 - 127 */
MMC_RESP_3 = 0x20,
MMC_BLOCK_LENGTH = 0x24,
MMC_BLOCK_COUNT = 0x28,
/* MMC state flags */
MMC_STATE = 0x2C,
/* MMC_INT_STATUS bits */
MMC_STAT_END_OF_CMD = 1U << 0,
MMC_STAT_END_OF_DATA = 1U << 1,
MMC_STAT_STATE_CHANGE = 1U << 2,
MMC_STAT_CMD_TIMEOUT = 1U << 3,
/* MMC_STATE bits */
MMC_STATE_INSERTED = 1U << 0,
MMC_STATE_READ_ONLY = 1U << 1,
};
/*
* Command types
*/
#define OMAP_MMC_CMDTYPE_BC 0
#define OMAP_MMC_CMDTYPE_BCR 1
#define OMAP_MMC_CMDTYPE_AC 2
#define OMAP_MMC_CMDTYPE_ADTC 3
struct goldfish_mmc_host {
struct mmc_request *mrq;
struct mmc_command *cmd;
struct mmc_data *data;
struct mmc_host *mmc;
struct device *dev;
unsigned char id; /* 16xx chips have 2 MMC blocks */
void __iomem *virt_base;
unsigned int phys_base;
int irq;
unsigned char bus_mode;
unsigned char hw_bus_mode;
unsigned int sg_len;
unsigned dma_done:1;
unsigned dma_in_use:1;
void __iomem *reg_base;
};
static inline int
goldfish_mmc_cover_is_open(struct goldfish_mmc_host *host)
{
return 0;
}
static ssize_t
goldfish_mmc_show_cover_switch(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct goldfish_mmc_host *host = dev_get_drvdata(dev);
return sprintf(buf, "%s\n", goldfish_mmc_cover_is_open(host) ? "open" :
"closed");
}
static DEVICE_ATTR(cover_switch, S_IRUGO, goldfish_mmc_show_cover_switch, NULL);
static void
goldfish_mmc_start_command(struct goldfish_mmc_host *host, struct mmc_command *cmd)
{
u32 cmdreg;
u32 resptype;
u32 cmdtype;
host->cmd = cmd;
resptype = 0;
cmdtype = 0;
/* Our hardware needs to know the exact type */
switch (mmc_resp_type(cmd)) {
case MMC_RSP_NONE:
break;
case MMC_RSP_R1:
case MMC_RSP_R1B:
/* resp 1, 1b, 6, 7 */
resptype = 1;
break;
case MMC_RSP_R2:
resptype = 2;
break;
case MMC_RSP_R3:
resptype = 3;
break;
default:
dev_err(mmc_dev(host->mmc),
"Invalid response type: %04x\n", mmc_resp_type(cmd));
break;
}
if (mmc_cmd_type(cmd) == MMC_CMD_ADTC)
cmdtype = OMAP_MMC_CMDTYPE_ADTC;
else if (mmc_cmd_type(cmd) == MMC_CMD_BC)
cmdtype = OMAP_MMC_CMDTYPE_BC;
else if (mmc_cmd_type(cmd) == MMC_CMD_BCR)
cmdtype = OMAP_MMC_CMDTYPE_BCR;
else
cmdtype = OMAP_MMC_CMDTYPE_AC;
cmdreg = cmd->opcode | (resptype << 8) | (cmdtype << 12);
if (host->bus_mode == MMC_BUSMODE_OPENDRAIN)
cmdreg |= 1 << 6;
if (cmd->flags & MMC_RSP_BUSY)
cmdreg |= 1 << 11;
if (host->data && !(host->data->flags & MMC_DATA_WRITE))
cmdreg |= 1 << 15;
GOLDFISH_MMC_WRITE(host, MMC_ARG, cmd->arg);
GOLDFISH_MMC_WRITE(host, MMC_CMD, cmdreg);
}
static void goldfish_mmc_xfer_done(struct goldfish_mmc_host *host,
struct mmc_data *data)
{
if (host->dma_in_use) {
enum dma_data_direction dma_data_dir;
if (data->flags & MMC_DATA_WRITE)
dma_data_dir = DMA_TO_DEVICE;
else
dma_data_dir = DMA_FROM_DEVICE;
if (dma_data_dir == DMA_FROM_DEVICE) {
/*
* We don't really have DMA, so we need
* to copy from our platform driver buffer
*/
uint8_t *dest = (uint8_t *)sg_virt(data->sg);
memcpy(dest, host->virt_base, data->sg->length);
}
host->data->bytes_xfered += data->sg->length;
dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->sg_len,
dma_data_dir);
}
host->data = NULL;
host->sg_len = 0;
/*
* NOTE: MMC layer will sometimes poll-wait CMD13 next, issuing
* dozens of requests until the card finishes writing data.
* It'd be cheaper to just wait till an EOFB interrupt arrives...
*/
if (!data->stop) {
host->mrq = NULL;
mmc_request_done(host->mmc, data->mrq);
return;
}
goldfish_mmc_start_command(host, data->stop);
}
static void goldfish_mmc_end_of_data(struct goldfish_mmc_host *host,
struct mmc_data *data)
{
if (!host->dma_in_use) {
goldfish_mmc_xfer_done(host, data);
return;
}
if (host->dma_done)
goldfish_mmc_xfer_done(host, data);
}
static void goldfish_mmc_cmd_done(struct goldfish_mmc_host *host,
struct mmc_command *cmd)
{
host->cmd = NULL;
if (cmd->flags & MMC_RSP_PRESENT) {
if (cmd->flags & MMC_RSP_136) {
/* response type 2 */
cmd->resp[3] =
GOLDFISH_MMC_READ(host, MMC_RESP_0);
cmd->resp[2] =
GOLDFISH_MMC_READ(host, MMC_RESP_1);
cmd->resp[1] =
GOLDFISH_MMC_READ(host, MMC_RESP_2);
cmd->resp[0] =
GOLDFISH_MMC_READ(host, MMC_RESP_3);
} else {
/* response types 1, 1b, 3, 4, 5, 6 */
cmd->resp[0] =
GOLDFISH_MMC_READ(host, MMC_RESP_0);
}
}
if (host->data == NULL || cmd->error) {
host->mrq = NULL;
mmc_request_done(host->mmc, cmd->mrq);
}
}
static irqreturn_t goldfish_mmc_irq(int irq, void *dev_id)
{
struct goldfish_mmc_host *host = (struct goldfish_mmc_host *)dev_id;
u16 status;
int end_command = 0;
int end_transfer = 0;
int transfer_error = 0;
int state_changed = 0;
int cmd_timeout = 0;
while ((status = GOLDFISH_MMC_READ(host, MMC_INT_STATUS)) != 0) {
GOLDFISH_MMC_WRITE(host, MMC_INT_STATUS, status);
if (status & MMC_STAT_END_OF_CMD)
end_command = 1;
if (status & MMC_STAT_END_OF_DATA)
end_transfer = 1;
if (status & MMC_STAT_STATE_CHANGE)
state_changed = 1;
if (status & MMC_STAT_CMD_TIMEOUT) {
end_command = 0;
cmd_timeout = 1;
}
}
if (cmd_timeout) {
struct mmc_request *mrq = host->mrq;
mrq->cmd->error = -ETIMEDOUT;
host->mrq = NULL;
mmc_request_done(host->mmc, mrq);
}
if (end_command)
goldfish_mmc_cmd_done(host, host->cmd);
if (transfer_error)
goldfish_mmc_xfer_done(host, host->data);
else if (end_transfer) {
host->dma_done = 1;
goldfish_mmc_end_of_data(host, host->data);
} else if (host->data != NULL) {
/*
* WORKAROUND -- after porting this driver from 2.6 to 3.4,
* during device initialization, cases where host->data is
* non-null but end_transfer is false would occur. Doing
* nothing in such cases results in no further interrupts,
* and initialization failure.
* TODO -- find the real cause.
*/
host->dma_done = 1;
goldfish_mmc_end_of_data(host, host->data);
}
if (state_changed) {
u32 state = GOLDFISH_MMC_READ(host, MMC_STATE);
pr_info("%s: Card detect now %d\n", __func__,
(state & MMC_STATE_INSERTED));
mmc_detect_change(host->mmc, 0);
}
if (!end_command && !end_transfer &&
!transfer_error && !state_changed && !cmd_timeout) {
status = GOLDFISH_MMC_READ(host, MMC_INT_STATUS);
dev_info(mmc_dev(host->mmc),"spurious irq 0x%04x\n", status);
if (status != 0) {
GOLDFISH_MMC_WRITE(host, MMC_INT_STATUS, status);
GOLDFISH_MMC_WRITE(host, MMC_INT_ENABLE, 0);
}
}
return IRQ_HANDLED;
}
static void goldfish_mmc_prepare_data(struct goldfish_mmc_host *host,
struct mmc_request *req)
{
struct mmc_data *data = req->data;
int block_size;
unsigned sg_len;
enum dma_data_direction dma_data_dir;
host->data = data;
if (data == NULL) {
GOLDFISH_MMC_WRITE(host, MMC_BLOCK_LENGTH, 0);
GOLDFISH_MMC_WRITE(host, MMC_BLOCK_COUNT, 0);
host->dma_in_use = 0;
return;
}
block_size = data->blksz;
GOLDFISH_MMC_WRITE(host, MMC_BLOCK_COUNT, data->blocks - 1);
GOLDFISH_MMC_WRITE(host, MMC_BLOCK_LENGTH, block_size - 1);
/*
* Cope with calling layer confusion; it issues "single
* block" writes using multi-block scatterlists.
*/
sg_len = (data->blocks == 1) ? 1 : data->sg_len;
if (data->flags & MMC_DATA_WRITE)
dma_data_dir = DMA_TO_DEVICE;
else
dma_data_dir = DMA_FROM_DEVICE;
host->sg_len = dma_map_sg(mmc_dev(host->mmc), data->sg,
sg_len, dma_data_dir);
host->dma_done = 0;
host->dma_in_use = 1;
if (dma_data_dir == DMA_TO_DEVICE) {
/*
* We don't really have DMA, so we need to copy to our
* platform driver buffer
*/
const uint8_t *src = (uint8_t *)sg_virt(data->sg);
memcpy(host->virt_base, src, data->sg->length);
}
}
static void goldfish_mmc_request(struct mmc_host *mmc, struct mmc_request *req)
{
struct goldfish_mmc_host *host = mmc_priv(mmc);
WARN_ON(host->mrq != NULL);
host->mrq = req;
goldfish_mmc_prepare_data(host, req);
goldfish_mmc_start_command(host, req->cmd);
/*
* This is to avoid accidentally being detected as an SDIO card
* in mmc_attach_sdio().
*/
if (req->cmd->opcode == SD_IO_SEND_OP_COND &&
req->cmd->flags == (MMC_RSP_SPI_R4 | MMC_RSP_R4 | MMC_CMD_BCR))
req->cmd->error = -EINVAL;
}
static void goldfish_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
{
struct goldfish_mmc_host *host = mmc_priv(mmc);
host->bus_mode = ios->bus_mode;
host->hw_bus_mode = host->bus_mode;
}
static int goldfish_mmc_get_ro(struct mmc_host *mmc)
{
uint32_t state;
struct goldfish_mmc_host *host = mmc_priv(mmc);
state = GOLDFISH_MMC_READ(host, MMC_STATE);
return ((state & MMC_STATE_READ_ONLY) != 0);
}
static const struct mmc_host_ops goldfish_mmc_ops = {
.request = goldfish_mmc_request,
.set_ios = goldfish_mmc_set_ios,
.get_ro = goldfish_mmc_get_ro,
};
static int goldfish_mmc_probe(struct platform_device *pdev)
{
struct mmc_host *mmc;
struct goldfish_mmc_host *host = NULL;
struct resource *res;
int ret = 0;
int irq;
dma_addr_t buf_addr;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
irq = platform_get_irq(pdev, 0);
if (res == NULL || irq < 0)
return -ENXIO;
mmc = mmc_alloc_host(sizeof(struct goldfish_mmc_host), &pdev->dev);
if (mmc == NULL) {
ret = -ENOMEM;
goto err_alloc_host_failed;
}
host = mmc_priv(mmc);
host->mmc = mmc;
pr_err("mmc: Mapping %lX to %lX\n", (long)res->start, (long)res->end);
host->reg_base = ioremap(res->start, res->end - res->start + 1);
if (host->reg_base == NULL) {
ret = -ENOMEM;
goto ioremap_failed;
}
host->virt_base = dma_alloc_coherent(&pdev->dev, BUFFER_SIZE,
&buf_addr, GFP_KERNEL);
if (host->virt_base == 0) {
ret = -ENOMEM;
goto dma_alloc_failed;
}
host->phys_base = buf_addr;
host->id = pdev->id;
host->irq = irq;
mmc->ops = &goldfish_mmc_ops;
mmc->f_min = 400000;
mmc->f_max = 24000000;
mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
mmc->caps = MMC_CAP_4_BIT_DATA;
/* Use scatterlist DMA to reduce per-transfer costs.
* NOTE max_seg_size assumption that small blocks aren't
* normally used (except e.g. for reading SD registers).
*/
mmc->max_segs = 32;
mmc->max_blk_size = 2048; /* MMC_BLOCK_LENGTH is 11 bits (+1) */
mmc->max_blk_count = 2048; /* MMC_BLOCK_COUNT is 11 bits (+1) */
mmc->max_req_size = BUFFER_SIZE;
mmc->max_seg_size = mmc->max_req_size;
ret = request_irq(host->irq, goldfish_mmc_irq, 0, DRIVER_NAME, host);
if (ret) {
dev_err(&pdev->dev, "Failed IRQ Adding goldfish MMC\n");
goto err_request_irq_failed;
}
host->dev = &pdev->dev;
platform_set_drvdata(pdev, host);
ret = device_create_file(&pdev->dev, &dev_attr_cover_switch);
if (ret)
dev_warn(mmc_dev(host->mmc),
"Unable to create sysfs attributes\n");
GOLDFISH_MMC_WRITE(host, MMC_SET_BUFFER, host->phys_base);
GOLDFISH_MMC_WRITE(host, MMC_INT_ENABLE,
MMC_STAT_END_OF_CMD | MMC_STAT_END_OF_DATA |
MMC_STAT_STATE_CHANGE | MMC_STAT_CMD_TIMEOUT);
mmc_add_host(mmc);
return 0;
err_request_irq_failed:
dma_free_coherent(&pdev->dev, BUFFER_SIZE, host->virt_base,
host->phys_base);
dma_alloc_failed:
iounmap(host->reg_base);
ioremap_failed:
mmc_free_host(host->mmc);
err_alloc_host_failed:
return ret;
}
static int goldfish_mmc_remove(struct platform_device *pdev)
{
struct goldfish_mmc_host *host = platform_get_drvdata(pdev);
platform_set_drvdata(pdev, NULL);
BUG_ON(host == NULL);
mmc_remove_host(host->mmc);
free_irq(host->irq, host);
dma_free_coherent(&pdev->dev, BUFFER_SIZE, host->virt_base, host->phys_base);
iounmap(host->reg_base);
mmc_free_host(host->mmc);
return 0;
}
static struct platform_driver goldfish_mmc_driver = {
.probe = goldfish_mmc_probe,
.remove = goldfish_mmc_remove,
.driver = {
.name = DRIVER_NAME,
},
};
module_platform_driver(goldfish_mmc_driver);
MODULE_LICENSE("GPL v2");
......@@ -175,16 +175,6 @@ static int dw_mci_exynos_setup_bus(struct dw_mci *host,
}
}
gpio = of_get_named_gpio(slot_np, "wp-gpios", 0);
if (gpio_is_valid(gpio)) {
if (devm_gpio_request(host->dev, gpio, "dw-mci-wp"))
dev_info(host->dev, "gpio [%d] request failed\n",
gpio);
} else {
dev_info(host->dev, "wp gpio not available");
host->pdata->quirks |= DW_MCI_QUIRK_NO_WRITE_PROTECT;
}
if (host->pdata->quirks & DW_MCI_QUIRK_BROKEN_CARD_DETECTION)
return 0;
......
......@@ -34,6 +34,7 @@
#include <linux/regulator/consumer.h>
#include <linux/workqueue.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include "dw_mmc.h"
......@@ -74,6 +75,8 @@ struct idmac_desc {
* struct dw_mci_slot - MMC slot state
* @mmc: The mmc_host representing this slot.
* @host: The MMC controller this slot is using.
* @quirks: Slot-level quirks (DW_MCI_SLOT_QUIRK_XXX)
* @wp_gpio: If gpio_is_valid() we'll use this to read write protect.
* @ctype: Card type for this slot.
* @mrq: mmc_request currently being processed or waiting to be
* processed, or NULL when the slot is idle.
......@@ -88,6 +91,9 @@ struct dw_mci_slot {
struct mmc_host *mmc;
struct dw_mci *host;
int quirks;
int wp_gpio;
u32 ctype;
struct mmc_request *mrq;
......@@ -825,10 +831,12 @@ static int dw_mci_get_ro(struct mmc_host *mmc)
struct dw_mci_board *brd = slot->host->pdata;
/* Use platform get_ro function, else try on board write protect */
if (brd->quirks & DW_MCI_QUIRK_NO_WRITE_PROTECT)
if (slot->quirks & DW_MCI_SLOT_QUIRK_NO_WRITE_PROTECT)
read_only = 0;
else if (brd->get_ro)
read_only = brd->get_ro(slot->id);
else if (gpio_is_valid(slot->wp_gpio))
read_only = gpio_get_value(slot->wp_gpio);
else
read_only =
mci_readl(slot->host, WRTPRT) & (1 << slot->id) ? 1 : 0;
......@@ -1785,6 +1793,30 @@ static struct device_node *dw_mci_of_find_slot_node(struct device *dev, u8 slot)
return NULL;
}
static struct dw_mci_of_slot_quirks {
char *quirk;
int id;
} of_slot_quirks[] = {
{
.quirk = "disable-wp",
.id = DW_MCI_SLOT_QUIRK_NO_WRITE_PROTECT,
},
};
static int dw_mci_of_get_slot_quirks(struct device *dev, u8 slot)
{
struct device_node *np = dw_mci_of_find_slot_node(dev, slot);
int quirks = 0;
int idx;
/* get quirks */
for (idx = 0; idx < ARRAY_SIZE(of_slot_quirks); idx++)
if (of_get_property(np, of_slot_quirks[idx].quirk, NULL))
quirks |= of_slot_quirks[idx].id;
return quirks;
}
/* find out bus-width for a given slot */
static u32 dw_mci_of_get_bus_wd(struct device *dev, u8 slot)
{
......@@ -1799,7 +1831,34 @@ static u32 dw_mci_of_get_bus_wd(struct device *dev, u8 slot)
" as 1\n");
return bus_wd;
}
/* find the write protect gpio for a given slot; or -1 if none specified */
static int dw_mci_of_get_wp_gpio(struct device *dev, u8 slot)
{
struct device_node *np = dw_mci_of_find_slot_node(dev, slot);
int gpio;
if (!np)
return -EINVAL;
gpio = of_get_named_gpio(np, "wp-gpios", 0);
/* Having a missing entry is valid; return silently */
if (!gpio_is_valid(gpio))
return -EINVAL;
if (devm_gpio_request(dev, gpio, "dw-mci-wp")) {
dev_warn(dev, "gpio [%d] request failed\n", gpio);
return -EINVAL;
}
return gpio;
}
#else /* CONFIG_OF */
static int dw_mci_of_get_slot_quirks(struct device *dev, u8 slot)
{
return 0;
}
static u32 dw_mci_of_get_bus_wd(struct device *dev, u8 slot)
{
return 1;
......@@ -1808,6 +1867,10 @@ static struct device_node *dw_mci_of_find_slot_node(struct device *dev, u8 slot)
{
return NULL;
}
static int dw_mci_of_get_wp_gpio(struct device *dev, u8 slot)
{
return -EINVAL;
}
#endif /* CONFIG_OF */
static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
......@@ -1828,6 +1891,8 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
slot->host = host;
host->slot[id] = slot;
slot->quirks = dw_mci_of_get_slot_quirks(host->dev, slot->id);
mmc->ops = &dw_mci_ops;
mmc->f_min = DIV_ROUND_UP(host->bus_hz, 510);
mmc->f_max = host->bus_hz;
......@@ -1923,6 +1988,8 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
else
clear_bit(DW_MMC_CARD_PRESENT, &slot->flags);
slot->wp_gpio = dw_mci_of_get_wp_gpio(host->dev, slot->id);
mmc_add_host(mmc);
#if defined(CONFIG_DEBUG_FS)
......
......@@ -21,7 +21,11 @@
#include <linux/irq.h>
#include <linux/clk.h>
#include <linux/gpio.h>
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/mmc/host.h>
#include <linux/mmc/slot-gpio.h>
#include <linux/pinctrl/consumer.h>
#include <asm/sizes.h>
#include <asm/unaligned.h>
......@@ -51,8 +55,6 @@ struct mvsd_host {
struct mmc_host *mmc;
struct device *dev;
struct clk *clk;
int gpio_card_detect;
int gpio_write_protect;
};
#define mvsd_write(offs, val) writel(val, iobase + (offs))
......@@ -538,13 +540,6 @@ static void mvsd_timeout_timer(unsigned long data)
mmc_request_done(host->mmc, mrq);
}
static irqreturn_t mvsd_card_detect_irq(int irq, void *dev)
{
struct mvsd_host *host = dev;
mmc_detect_change(host->mmc, msecs_to_jiffies(100));
return IRQ_HANDLED;
}
static void mvsd_enable_sdio_irq(struct mmc_host *mmc, int enable)
{
struct mvsd_host *host = mmc_priv(mmc);
......@@ -564,20 +559,6 @@ static void mvsd_enable_sdio_irq(struct mmc_host *mmc, int enable)
spin_unlock_irqrestore(&host->lock, flags);
}
static int mvsd_get_ro(struct mmc_host *mmc)
{
struct mvsd_host *host = mmc_priv(mmc);
if (host->gpio_write_protect)
return gpio_get_value(host->gpio_write_protect);
/*
* Board doesn't support read only detection; let the mmc core
* decide what to do.
*/
return -ENOSYS;
}
static void mvsd_power_up(struct mvsd_host *host)
{
void __iomem *iobase = host->base;
......@@ -674,7 +655,7 @@ static void mvsd_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
static const struct mmc_host_ops mvsd_ops = {
.request = mvsd_request,
.get_ro = mvsd_get_ro,
.get_ro = mmc_gpio_get_ro,
.set_ios = mvsd_set_ios,
.enable_sdio_irq = mvsd_enable_sdio_irq,
};
......@@ -703,17 +684,18 @@ mv_conf_mbus_windows(struct mvsd_host *host,
static int __init mvsd_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct mmc_host *mmc = NULL;
struct mvsd_host *host = NULL;
const struct mvsdio_platform_data *mvsd_data;
const struct mbus_dram_target_info *dram;
struct resource *r;
int ret, irq;
int gpio_card_detect, gpio_write_protect;
struct pinctrl *pinctrl;
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
irq = platform_get_irq(pdev, 0);
mvsd_data = pdev->dev.platform_data;
if (!r || irq < 0 || !mvsd_data)
if (!r || irq < 0)
return -ENXIO;
mmc = mmc_alloc_host(sizeof(struct mvsd_host), &pdev->dev);
......@@ -725,8 +707,43 @@ static int __init mvsd_probe(struct platform_device *pdev)
host = mmc_priv(mmc);
host->mmc = mmc;
host->dev = &pdev->dev;
host->base_clock = mvsd_data->clock / 2;
host->clk = ERR_PTR(-EINVAL);
pinctrl = devm_pinctrl_get_select_default(&pdev->dev);
if (IS_ERR(pinctrl))
dev_warn(&pdev->dev, "no pins associated\n");
/*
* Some non-DT platforms do not pass a clock, and the clock
* frequency is passed through platform_data. On DT platforms,
* a clock must always be passed, even if there is no gatable
clock associated with the SDIO interface (it can simply be a
* fixed rate clock).
*/
host->clk = devm_clk_get(&pdev->dev, NULL);
if (!IS_ERR(host->clk))
clk_prepare_enable(host->clk);
if (np) {
if (IS_ERR(host->clk)) {
dev_err(&pdev->dev, "DT platforms must have a clock associated\n");
ret = -EINVAL;
goto out;
}
host->base_clock = clk_get_rate(host->clk) / 2;
gpio_card_detect = of_get_named_gpio(np, "cd-gpios", 0);
gpio_write_protect = of_get_named_gpio(np, "wp-gpios", 0);
} else {
const struct mvsdio_platform_data *mvsd_data;
mvsd_data = pdev->dev.platform_data;
if (!mvsd_data) {
ret = -ENXIO;
goto out;
}
host->base_clock = mvsd_data->clock / 2;
gpio_card_detect = mvsd_data->gpio_card_detect;
gpio_write_protect = mvsd_data->gpio_write_protect;
}
mmc->ops = &mvsd_ops;
......@@ -765,43 +782,14 @@ static int __init mvsd_probe(struct platform_device *pdev)
goto out;
}
/* Not all platforms can gate the clock, so it is not
an error if the clock does not exist. */
host->clk = devm_clk_get(&pdev->dev, NULL);
if (!IS_ERR(host->clk))
clk_prepare_enable(host->clk);
if (mvsd_data->gpio_card_detect) {
ret = devm_gpio_request_one(&pdev->dev,
mvsd_data->gpio_card_detect,
GPIOF_IN, DRIVER_NAME " cd");
if (ret == 0) {
irq = gpio_to_irq(mvsd_data->gpio_card_detect);
ret = devm_request_irq(&pdev->dev, irq,
mvsd_card_detect_irq,
IRQ_TYPE_EDGE_RISING |
IRQ_TYPE_EDGE_FALLING,
DRIVER_NAME " cd", host);
if (ret == 0)
host->gpio_card_detect =
mvsd_data->gpio_card_detect;
else
devm_gpio_free(&pdev->dev,
mvsd_data->gpio_card_detect);
}
}
if (!host->gpio_card_detect)
if (gpio_is_valid(gpio_card_detect)) {
ret = mmc_gpio_request_cd(mmc, gpio_card_detect);
if (ret)
goto out;
} else
mmc->caps |= MMC_CAP_NEEDS_POLL;
if (mvsd_data->gpio_write_protect) {
ret = devm_gpio_request_one(&pdev->dev,
mvsd_data->gpio_write_protect,
GPIOF_IN, DRIVER_NAME " wp");
if (ret == 0) {
host->gpio_write_protect =
mvsd_data->gpio_write_protect;
}
}
mmc_gpio_request_ro(mmc, gpio_write_protect);
setup_timer(&host->timer, mvsd_timeout_timer, (unsigned long)host);
platform_set_drvdata(pdev, mmc);
......@@ -811,15 +799,17 @@ static int __init mvsd_probe(struct platform_device *pdev)
pr_notice("%s: %s driver initialized, ",
mmc_hostname(mmc), DRIVER_NAME);
if (host->gpio_card_detect)
if (!(mmc->caps & MMC_CAP_NEEDS_POLL))
printk("using GPIO %d for card detection\n",
host->gpio_card_detect);
gpio_card_detect);
else
printk("lacking card detect (fall back to polling)\n");
return 0;
out:
if (mmc) {
mmc_gpio_free_cd(mmc);
mmc_gpio_free_ro(mmc);
if (!IS_ERR(host->clk))
clk_disable_unprepare(host->clk);
mmc_free_host(mmc);
......@@ -834,6 +824,8 @@ static int __exit mvsd_remove(struct platform_device *pdev)
struct mvsd_host *host = mmc_priv(mmc);
mmc_gpio_free_cd(mmc);
mmc_gpio_free_ro(mmc);
mmc_remove_host(mmc);
del_timer_sync(&host->timer);
mvsd_power_down(host);
......@@ -873,12 +865,19 @@ static int mvsd_resume(struct platform_device *dev)
#define mvsd_resume NULL
#endif
static const struct of_device_id mvsdio_dt_ids[] = {
{ .compatible = "marvell,orion-sdio" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, mvsdio_dt_ids);
static struct platform_driver mvsd_driver = {
.remove = __exit_p(mvsd_remove),
.suspend = mvsd_suspend,
.resume = mvsd_resume,
.driver = {
.name = DRIVER_NAME,
.of_match_table = mvsdio_dt_ids,
},
};
......
......@@ -354,7 +354,7 @@ static void mxs_mmc_adtc(struct mxs_mmc_host *host)
struct dma_async_tx_descriptor *desc;
struct scatterlist *sgl = data->sg, *sg;
unsigned int sg_len = data->sg_len;
int i;
unsigned int i;
unsigned short dma_data_dir, timeout;
enum dma_transfer_direction slave_dirn;
......@@ -804,3 +804,4 @@ module_platform_driver(mxs_mmc_driver);
MODULE_DESCRIPTION("FREESCALE MXS MMC peripheral");
MODULE_AUTHOR("Freescale Semiconductor");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:" DRIVER_NAME);
......@@ -146,7 +146,7 @@ struct mmc_spi_platform_data *mmc_spi_get_pdata(struct spi_device *spi)
oms->pdata.get_ro = of_mmc_spi_get_ro;
oms->detect_irq = irq_of_parse_and_map(np, 0);
if (oms->detect_irq != NO_IRQ) {
if (oms->detect_irq != 0) {
oms->pdata.init = of_mmc_spi_init;
oms->pdata.exit = of_mmc_spi_exit;
} else {
......
......@@ -1097,11 +1097,6 @@ static int sdmmc_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios)
voltage = OUTPUT_1V8;
if (voltage == OUTPUT_1V8) {
err = rtsx_pci_write_register(pcr,
SD30_DRIVE_SEL, 0x07, DRIVER_TYPE_B);
if (err < 0)
goto out;
err = sd_wait_voltage_stable_1(host);
if (err < 0)
goto out;
......
/*
* BCM2835 SDHCI
* Copyright (C) 2012 Stephen Warren
* Based on U-Boot's MMC driver for the BCM2835 by Oleksandr Tymoshenko & me
* Portions of the code there were obviously based on the Linux kernel at:
* git://github.com/raspberrypi/linux.git rpi-3.6.y
* commit f5b930b "Main bcm2708 linux port" signed-off-by Dom Cobley.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/mmc/host.h>
#include "sdhci-pltfm.h"
/*
* 400KHz is max freq for card ID etc. Use that as min card clock. We need to
* know the min to enable static calculation of max BCM2835_SDHCI_WRITE_DELAY.
*/
#define MIN_FREQ 400000
/*
* The Arasan has a bugette whereby it may lose the content of successive
* writes to registers that are within two SD-card clock cycles of each other
* (a clock domain crossing problem). It seems, however, that the data
* register does not have this problem, which is just as well - otherwise we'd
* have to nobble the DMA engine too.
*
* This should probably be dynamically calculated based on the actual card
* frequency. However, this is the longest we'll have to wait, and doesn't
* seem to slow access down too much, so the added complexity doesn't seem
* worth it for now.
*
* 1/MIN_FREQ is (max) time per tick of eMMC clock.
* 2/MIN_FREQ is time for two ticks.
Multiply by 1000000 to get uS per two ticks.
* +1 for hack rounding.
*/
#define BCM2835_SDHCI_WRITE_DELAY (((2 * 1000000) / MIN_FREQ) + 1)
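As a quick check of the arithmetic above (plain evaluation, not part of the driver), with MIN_FREQ = 400000:

BCM2835_SDHCI_WRITE_DELAY = ((2 * 1000000) / 400000) + 1 = 5 + 1 = 6

so every register write issued through bcm2835_sdhci_writel() below is followed by roughly a 6 us udelay().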
struct bcm2835_sdhci {
u32 shadow;
};
static void bcm2835_sdhci_writel(struct sdhci_host *host, u32 val, int reg)
{
writel(val, host->ioaddr + reg);
udelay(BCM2835_SDHCI_WRITE_DELAY);
}
static inline u32 bcm2835_sdhci_readl(struct sdhci_host *host, int reg)
{
u32 val = readl(host->ioaddr + reg);
if (reg == SDHCI_CAPABILITIES)
val |= SDHCI_CAN_VDD_330;
return val;
}
static void bcm2835_sdhci_writew(struct sdhci_host *host, u16 val, int reg)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct bcm2835_sdhci *bcm2835_host = pltfm_host->priv;
u32 oldval = (reg == SDHCI_COMMAND) ? bcm2835_host->shadow :
bcm2835_sdhci_readl(host, reg & ~3);
u32 word_num = (reg >> 1) & 1;
u32 word_shift = word_num * 16;
u32 mask = 0xffff << word_shift;
u32 newval = (oldval & ~mask) | (val << word_shift);
if (reg == SDHCI_TRANSFER_MODE)
bcm2835_host->shadow = newval;
else
bcm2835_sdhci_writel(host, newval, reg & ~3);
}
static u16 bcm2835_sdhci_readw(struct sdhci_host *host, int reg)
{
u32 val = bcm2835_sdhci_readl(host, (reg & ~3));
u32 word_num = (reg >> 1) & 1;
u32 word_shift = word_num * 16;
u32 word = (val >> word_shift) & 0xffff;
return word;
}
static void bcm2835_sdhci_writeb(struct sdhci_host *host, u8 val, int reg)
{
u32 oldval = bcm2835_sdhci_readl(host, reg & ~3);
u32 byte_num = reg & 3;
u32 byte_shift = byte_num * 8;
u32 mask = 0xff << byte_shift;
u32 newval = (oldval & ~mask) | (val << byte_shift);
bcm2835_sdhci_writel(host, newval, reg & ~3);
}
static u8 bcm2835_sdhci_readb(struct sdhci_host *host, int reg)
{
u32 val = bcm2835_sdhci_readl(host, (reg & ~3));
u32 byte_num = reg & 3;
u32 byte_shift = byte_num * 8;
u32 byte = (val >> byte_shift) & 0xff;
return byte;
}
unsigned int bcm2835_sdhci_get_min_clock(struct sdhci_host *host)
{
return MIN_FREQ;
}
static struct sdhci_ops bcm2835_sdhci_ops = {
.write_l = bcm2835_sdhci_writel,
.write_w = bcm2835_sdhci_writew,
.write_b = bcm2835_sdhci_writeb,
.read_l = bcm2835_sdhci_readl,
.read_w = bcm2835_sdhci_readw,
.read_b = bcm2835_sdhci_readb,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_min_clock = bcm2835_sdhci_get_min_clock,
};
static struct sdhci_pltfm_data bcm2835_sdhci_pdata = {
.quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION |
SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK,
.ops = &bcm2835_sdhci_ops,
};
static int bcm2835_sdhci_probe(struct platform_device *pdev)
{
struct sdhci_host *host;
struct bcm2835_sdhci *bcm2835_host;
struct sdhci_pltfm_host *pltfm_host;
int ret;
host = sdhci_pltfm_init(pdev, &bcm2835_sdhci_pdata);
if (IS_ERR(host))
return PTR_ERR(host);
bcm2835_host = devm_kzalloc(&pdev->dev, sizeof(*bcm2835_host),
GFP_KERNEL);
if (!bcm2835_host) {
dev_err(mmc_dev(host->mmc),
"failed to allocate bcm2835_sdhci\n");
return -ENOMEM;
}
pltfm_host = sdhci_priv(host);
pltfm_host->priv = bcm2835_host;
pltfm_host->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(pltfm_host->clk)) {
ret = PTR_ERR(pltfm_host->clk);
goto err;
}
return sdhci_add_host(host);
err:
sdhci_pltfm_free(pdev);
return ret;
}
static int bcm2835_sdhci_remove(struct platform_device *pdev)
{
struct sdhci_host *host = platform_get_drvdata(pdev);
int dead = (readl(host->ioaddr + SDHCI_INT_STATUS) == 0xffffffff);
sdhci_remove_host(host, dead);
sdhci_pltfm_free(pdev);
return 0;
}
static const struct of_device_id bcm2835_sdhci_of_match[] = {
{ .compatible = "brcm,bcm2835-sdhci" },
{ }
};
MODULE_DEVICE_TABLE(of, bcm2835_sdhci_of_match);
static struct platform_driver bcm2835_sdhci_driver = {
.driver = {
.name = "sdhci-bcm2835",
.owner = THIS_MODULE,
.of_match_table = bcm2835_sdhci_of_match,
.pm = SDHCI_PLTFM_PMOPS,
},
.probe = bcm2835_sdhci_probe,
.remove = bcm2835_sdhci_remove,
};
module_platform_driver(bcm2835_sdhci_driver);
MODULE_DESCRIPTION("BCM2835 SDHCI driver");
MODULE_AUTHOR("Stephen Warren");
MODULE_LICENSE("GPL v2");
......@@ -21,6 +21,7 @@
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>
#include <linux/mmc/sdio.h>
#include <linux/mmc/slot-gpio.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
......@@ -29,12 +30,22 @@
#include "sdhci-pltfm.h"
#include "sdhci-esdhc.h"
#define SDHCI_CTRL_D3CD 0x08
#define ESDHC_CTRL_D3CD 0x08
/* VENDOR SPEC register */
#define SDHCI_VENDOR_SPEC 0xC0
#define SDHCI_VENDOR_SPEC_SDIO_QUIRK 0x00000002
#define SDHCI_WTMK_LVL 0x44
#define SDHCI_MIX_CTRL 0x48
#define ESDHC_VENDOR_SPEC 0xc0
#define ESDHC_VENDOR_SPEC_SDIO_QUIRK (1 << 1)
#define ESDHC_WTMK_LVL 0x44
#define ESDHC_MIX_CTRL 0x48
#define ESDHC_MIX_CTRL_AC23EN (1 << 7)
/* Bits 3 and 6 are not SDHCI standard definitions */
#define ESDHC_MIX_CTRL_SDHCI_MASK 0xb7
/*
* Our interpretation of the SDHCI_HOST_CONTROL register
*/
#define ESDHC_CTRL_4BITBUS (0x1 << 1)
#define ESDHC_CTRL_8BITBUS (0x2 << 1)
#define ESDHC_CTRL_BUSWIDTH_MASK (0x3 << 1)
/*
* There is an INT DMA ERR mis-match between eSDHC and STD SDHC SPEC:
......@@ -42,7 +53,7 @@
* but bit28 is used as the INT DMA ERR in fsl eSDHC design.
* Define this macro DMA error INT for fsl eSDHC
*/
#define SDHCI_INT_VENDOR_SPEC_DMA_ERR 0x10000000
#define ESDHC_INT_VENDOR_SPEC_DMA_ERR (1 << 28)
/*
* The CMDTYPE of the CMD register (offset 0xE) should be set to
......@@ -143,23 +154,8 @@ static inline void esdhc_clrset_le(struct sdhci_host *host, u32 mask, u32 val, i
static u32 esdhc_readl_le(struct sdhci_host *host, int reg)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = pltfm_host->priv;
struct esdhc_platform_data *boarddata = &imx_data->boarddata;
/* fake CARD_PRESENT flag */
u32 val = readl(host->ioaddr + reg);
if (unlikely((reg == SDHCI_PRESENT_STATE)
&& gpio_is_valid(boarddata->cd_gpio))) {
if (gpio_get_value(boarddata->cd_gpio))
/* no card, if a valid gpio says so... */
val &= ~SDHCI_CARD_PRESENT;
else
/* ... in all other cases assume card is present */
val |= SDHCI_CARD_PRESENT;
}
if (unlikely(reg == SDHCI_CAPABILITIES)) {
/* In FSL esdhc IC module, only bit20 is used to indicate the
* ADMA2 capability of esdhc, but this bit is messed up on
......@@ -175,8 +171,8 @@ static u32 esdhc_readl_le(struct sdhci_host *host, int reg)
}
if (unlikely(reg == SDHCI_INT_STATUS)) {
if (val & SDHCI_INT_VENDOR_SPEC_DMA_ERR) {
val &= ~SDHCI_INT_VENDOR_SPEC_DMA_ERR;
if (val & ESDHC_INT_VENDOR_SPEC_DMA_ERR) {
val &= ~ESDHC_INT_VENDOR_SPEC_DMA_ERR;
val |= SDHCI_INT_ADMA_ERROR;
}
}
......@@ -188,17 +184,9 @@ static void esdhc_writel_le(struct sdhci_host *host, u32 val, int reg)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = pltfm_host->priv;
struct esdhc_platform_data *boarddata = &imx_data->boarddata;
u32 data;
if (unlikely(reg == SDHCI_INT_ENABLE || reg == SDHCI_SIGNAL_ENABLE)) {
if (boarddata->cd_type == ESDHC_CD_GPIO)
/*
* These interrupts won't work with a custom
* card_detect gpio (only applied to mx25/35)
*/
val &= ~(SDHCI_INT_CARD_REMOVE | SDHCI_INT_CARD_INSERT);
if (val & SDHCI_INT_CARD_INT) {
/*
* Clear and then set D3CD bit to avoid missing the
......@@ -209,9 +197,9 @@ static void esdhc_writel_le(struct sdhci_host *host, u32 val, int reg)
* re-sample it by the following steps.
*/
data = readl(host->ioaddr + SDHCI_HOST_CONTROL);
data &= ~SDHCI_CTRL_D3CD;
data &= ~ESDHC_CTRL_D3CD;
writel(data, host->ioaddr + SDHCI_HOST_CONTROL);
data |= SDHCI_CTRL_D3CD;
data |= ESDHC_CTRL_D3CD;
writel(data, host->ioaddr + SDHCI_HOST_CONTROL);
}
}
......@@ -220,15 +208,15 @@ static void esdhc_writel_le(struct sdhci_host *host, u32 val, int reg)
&& (reg == SDHCI_INT_STATUS)
&& (val & SDHCI_INT_DATA_END))) {
u32 v;
v = readl(host->ioaddr + SDHCI_VENDOR_SPEC);
v &= ~SDHCI_VENDOR_SPEC_SDIO_QUIRK;
writel(v, host->ioaddr + SDHCI_VENDOR_SPEC);
v = readl(host->ioaddr + ESDHC_VENDOR_SPEC);
v &= ~ESDHC_VENDOR_SPEC_SDIO_QUIRK;
writel(v, host->ioaddr + ESDHC_VENDOR_SPEC);
}
if (unlikely(reg == SDHCI_INT_ENABLE || reg == SDHCI_SIGNAL_ENABLE)) {
if (val & SDHCI_INT_ADMA_ERROR) {
val &= ~SDHCI_INT_ADMA_ERROR;
val |= SDHCI_INT_VENDOR_SPEC_DMA_ERR;
val |= ESDHC_INT_VENDOR_SPEC_DMA_ERR;
}
}
......@@ -237,15 +225,18 @@ static void esdhc_writel_le(struct sdhci_host *host, u32 val, int reg)
static u16 esdhc_readw_le(struct sdhci_host *host, int reg)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = pltfm_host->priv;
if (unlikely(reg == SDHCI_HOST_VERSION)) {
u16 val = readw(host->ioaddr + (reg ^ 2));
/*
* uSDHC supports SDHCI v3.0, but it's encoded as value
* 0x3 in host controller version register, which violates
* SDHCI_SPEC_300 definition. Work it around here.
*/
if ((val & SDHCI_SPEC_VER_MASK) == 3)
return --val;
reg ^= 2;
if (is_imx6q_usdhc(imx_data)) {
/*
* The usdhc register returns a wrong host version.
* Correct it here.
*/
return SDHCI_SPEC_300;
}
}
return readw(host->ioaddr + reg);
......@@ -258,20 +249,32 @@ static void esdhc_writew_le(struct sdhci_host *host, u16 val, int reg)
switch (reg) {
case SDHCI_TRANSFER_MODE:
/*
* Postpone this write, we must do it together with a
* command write that is down below.
*/
if ((imx_data->flags & ESDHC_FLAG_MULTIBLK_NO_INT)
&& (host->cmd->opcode == SD_IO_RW_EXTENDED)
&& (host->cmd->data->blocks > 1)
&& (host->cmd->data->flags & MMC_DATA_READ)) {
u32 v;
v = readl(host->ioaddr + SDHCI_VENDOR_SPEC);
v |= SDHCI_VENDOR_SPEC_SDIO_QUIRK;
writel(v, host->ioaddr + SDHCI_VENDOR_SPEC);
v = readl(host->ioaddr + ESDHC_VENDOR_SPEC);
v |= ESDHC_VENDOR_SPEC_SDIO_QUIRK;
writel(v, host->ioaddr + ESDHC_VENDOR_SPEC);
}
if (is_imx6q_usdhc(imx_data)) {
u32 m = readl(host->ioaddr + ESDHC_MIX_CTRL);
/* Swap AC23 bit */
if (val & SDHCI_TRNS_AUTO_CMD23) {
val &= ~SDHCI_TRNS_AUTO_CMD23;
val |= ESDHC_MIX_CTRL_AC23EN;
}
m = val | (m & ~ESDHC_MIX_CTRL_SDHCI_MASK);
writel(m, host->ioaddr + ESDHC_MIX_CTRL);
} else {
/*
* Postpone this write, we must do it together with a
* command write that is down below.
*/
imx_data->scratchpad = val;
}
imx_data->scratchpad = val;
return;
case SDHCI_COMMAND:
if ((host->cmd->opcode == MMC_STOP_TRANSMISSION ||
......@@ -279,16 +282,12 @@ static void esdhc_writew_le(struct sdhci_host *host, u16 val, int reg)
(imx_data->flags & ESDHC_FLAG_MULTIBLK_NO_INT))
val |= SDHCI_CMD_ABORTCMD;
if (is_imx6q_usdhc(imx_data)) {
u32 m = readl(host->ioaddr + SDHCI_MIX_CTRL);
m = imx_data->scratchpad | (m & 0xffff0000);
writel(m, host->ioaddr + SDHCI_MIX_CTRL);
if (is_imx6q_usdhc(imx_data))
writel(val << 16,
host->ioaddr + SDHCI_TRANSFER_MODE);
} else {
else
writel(val << 16 | imx_data->scratchpad,
host->ioaddr + SDHCI_TRANSFER_MODE);
}
return;
case SDHCI_BLOCK_SIZE:
val &= ~SDHCI_MAKE_BLKSZ(0x7, 0);
......@@ -302,6 +301,7 @@ static void esdhc_writeb_le(struct sdhci_host *host, u8 val, int reg)
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = pltfm_host->priv;
u32 new_val;
u32 mask;
switch (reg) {
case SDHCI_POWER_CONTROL:
......@@ -311,10 +311,8 @@ static void esdhc_writeb_le(struct sdhci_host *host, u8 val, int reg)
*/
return;
case SDHCI_HOST_CONTROL:
/* FSL messed up here, so we can just keep those three */
new_val = val & (SDHCI_CTRL_LED | \
SDHCI_CTRL_4BITBUS | \
SDHCI_CTRL_D3CD);
/* FSL messed up here, so we need to manually compose it. */
new_val = val & SDHCI_CTRL_LED;
/* ensure the endianness */
new_val |= ESDHC_HOST_CONTROL_LE;
/* bits 8&9 are reserved on mx25 */
......@@ -323,7 +321,13 @@ static void esdhc_writeb_le(struct sdhci_host *host, u8 val, int reg)
new_val |= (val & SDHCI_CTRL_DMA_MASK) << 5;
}
esdhc_clrset_le(host, 0xffff, new_val, reg);
/*
* Do not touch buswidth bits here. This is done in
* esdhc_pltfm_bus_width.
*/
mask = 0xffff & ~ESDHC_CTRL_BUSWIDTH_MASK;
esdhc_clrset_le(host, mask, new_val, reg);
return;
}
esdhc_clrset_le(host, 0xff, val, reg);
......@@ -336,15 +340,15 @@ static void esdhc_writeb_le(struct sdhci_host *host, u8 val, int reg)
* circuit relies on. To work around it, we turn the clocks on back
* to keep card detection circuit functional.
*/
if ((reg == SDHCI_SOFTWARE_RESET) && (val & 1))
if ((reg == SDHCI_SOFTWARE_RESET) && (val & 1)) {
esdhc_clrset_le(host, 0x7, 0x7, ESDHC_SYSTEM_CONTROL);
}
static unsigned int esdhc_pltfm_get_max_clock(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
return clk_get_rate(pltfm_host->clk);
/*
* The reset on usdhc fails to clear MIX_CTRL register.
* Do it manually here.
*/
if (is_imx6q_usdhc(imx_data))
writel(0, host->ioaddr + ESDHC_MIX_CTRL);
}
}
static unsigned int esdhc_pltfm_get_min_clock(struct sdhci_host *host)
......@@ -362,8 +366,7 @@ static unsigned int esdhc_pltfm_get_ro(struct sdhci_host *host)
switch (boarddata->wp_type) {
case ESDHC_WP_GPIO:
if (gpio_is_valid(boarddata->wp_gpio))
return gpio_get_value(boarddata->wp_gpio);
return mmc_gpio_get_ro(host->mmc);
case ESDHC_WP_CONTROLLER:
return !(readl(host->ioaddr + SDHCI_PRESENT_STATE) &
SDHCI_WRITE_PROTECT);
......@@ -374,6 +377,28 @@ static unsigned int esdhc_pltfm_get_ro(struct sdhci_host *host)
return -ENOSYS;
}
static int esdhc_pltfm_bus_width(struct sdhci_host *host, int width)
{
u32 ctrl;
switch (width) {
case MMC_BUS_WIDTH_8:
ctrl = ESDHC_CTRL_8BITBUS;
break;
case MMC_BUS_WIDTH_4:
ctrl = ESDHC_CTRL_4BITBUS;
break;
default:
ctrl = 0;
break;
}
esdhc_clrset_le(host, ESDHC_CTRL_BUSWIDTH_MASK, ctrl,
SDHCI_HOST_CONTROL);
return 0;
}
static struct sdhci_ops sdhci_esdhc_ops = {
.read_l = esdhc_readl_le,
.read_w = esdhc_readw_le,
......@@ -381,9 +406,10 @@ static struct sdhci_ops sdhci_esdhc_ops = {
.write_w = esdhc_writew_le,
.write_b = esdhc_writeb_le,
.set_clock = esdhc_set_clock,
.get_max_clock = esdhc_pltfm_get_max_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_min_clock = esdhc_pltfm_get_min_clock,
.get_ro = esdhc_pltfm_get_ro,
.platform_bus_width = esdhc_pltfm_bus_width,
};
static struct sdhci_pltfm_data sdhci_esdhc_imx_pdata = {
......@@ -394,14 +420,6 @@ static struct sdhci_pltfm_data sdhci_esdhc_imx_pdata = {
.ops = &sdhci_esdhc_ops,
};
static irqreturn_t cd_irq(int irq, void *data)
{
struct sdhci_host *sdhost = (struct sdhci_host *)data;
tasklet_schedule(&sdhost->card_tasklet);
return IRQ_HANDLED;
};
#ifdef CONFIG_OF
static int
sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
......@@ -429,6 +447,8 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
if (gpio_is_valid(boarddata->wp_gpio))
boarddata->wp_type = ESDHC_WP_GPIO;
of_property_read_u32(np, "bus-width", &boarddata->max_bus_width);
return 0;
}
#else
......@@ -512,7 +532,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
* to something insane. Change it back here.
*/
if (is_imx6q_usdhc(imx_data))
writel(0x08100810, host->ioaddr + SDHCI_WTMK_LVL);
writel(0x08100810, host->ioaddr + ESDHC_WTMK_LVL);
boarddata = &imx_data->boarddata;
if (sdhci_esdhc_imx_probe_dt(pdev, boarddata) < 0) {
......@@ -527,37 +547,22 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
/* write_protect */
if (boarddata->wp_type == ESDHC_WP_GPIO) {
err = devm_gpio_request_one(&pdev->dev, boarddata->wp_gpio,
GPIOF_IN, "ESDHC_WP");
err = mmc_gpio_request_ro(host->mmc, boarddata->wp_gpio);
if (err) {
dev_warn(mmc_dev(host->mmc),
"no write-protect pin available!\n");
boarddata->wp_gpio = -EINVAL;
dev_err(mmc_dev(host->mmc),
"failed to request write-protect gpio!\n");
goto disable_clk;
}
} else {
boarddata->wp_gpio = -EINVAL;
host->mmc->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH;
}
/* card_detect */
if (boarddata->cd_type != ESDHC_CD_GPIO)
boarddata->cd_gpio = -EINVAL;
switch (boarddata->cd_type) {
case ESDHC_CD_GPIO:
err = devm_gpio_request_one(&pdev->dev, boarddata->cd_gpio,
GPIOF_IN, "ESDHC_CD");
err = mmc_gpio_request_cd(host->mmc, boarddata->cd_gpio);
if (err) {
dev_err(mmc_dev(host->mmc),
"no card-detect pin available!\n");
goto disable_clk;
}
err = devm_request_irq(&pdev->dev,
gpio_to_irq(boarddata->cd_gpio), cd_irq,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING,
mmc_hostname(host->mmc), host);
if (err) {
dev_err(mmc_dev(host->mmc), "request irq error\n");
"failed to request card-detect gpio!\n");
goto disable_clk;
}
/* fall through */
......@@ -575,6 +580,19 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
break;
}
switch (boarddata->max_bus_width) {
case 8:
host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA;
break;
case 4:
host->mmc->caps |= MMC_CAP_4_BIT_DATA;
break;
case 1:
default:
host->quirks |= SDHCI_QUIRK_FORCE_1_BIT_DATA;
break;
}
err = sdhci_add_host(host);
if (err)
goto disable_clk;
......
......@@ -935,7 +935,7 @@ static int sdhci_pci_enable_dma(struct sdhci_host *host)
return 0;
}
static int sdhci_pci_8bit_width(struct sdhci_host *host, int width)
static int sdhci_pci_bus_width(struct sdhci_host *host, int width)
{
u8 ctrl;
......@@ -977,7 +977,7 @@ static void sdhci_pci_hw_reset(struct sdhci_host *host)
static struct sdhci_ops sdhci_pci_ops = {
.enable_dma = sdhci_pci_enable_dma,
.platform_8bit_width = sdhci_pci_8bit_width,
.platform_bus_width = sdhci_pci_bus_width,
.hw_reset = sdhci_pci_hw_reset,
};
......
......@@ -36,6 +36,14 @@
#endif
#include "sdhci-pltfm.h"
unsigned int sdhci_pltfm_clk_get_max_clock(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
return clk_get_rate(pltfm_host->clk);
}
EXPORT_SYMBOL_GPL(sdhci_pltfm_clk_get_max_clock);
static struct sdhci_ops sdhci_pltfm_ops = {
};
......
......@@ -98,6 +98,8 @@ extern int sdhci_pltfm_register(struct platform_device *pdev,
struct sdhci_pltfm_data *pdata);
extern int sdhci_pltfm_unregister(struct platform_device *pdev);
extern unsigned int sdhci_pltfm_clk_get_max_clock(struct sdhci_host *host);
#ifdef CONFIG_PM
extern const struct dev_pm_ops sdhci_pltfm_pmops;
#define SDHCI_PLTFM_PMOPS (&sdhci_pltfm_pmops)
......
......@@ -111,17 +111,10 @@ static int pxav2_mmc_set_width(struct sdhci_host *host, int width)
return 0;
}
static u32 pxav2_get_max_clock(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
return clk_get_rate(pltfm_host->clk);
}
static struct sdhci_ops pxav2_sdhci_ops = {
.get_max_clock = pxav2_get_max_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.platform_reset_exit = pxav2_set_private_registers,
.platform_8bit_width = pxav2_mmc_set_width,
.platform_bus_width = pxav2_mmc_set_width,
};
#ifdef CONFIG_OF
......
......@@ -32,10 +32,14 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include "sdhci.h"
#include "sdhci-pltfm.h"
#define PXAV3_RPM_DELAY_MS 50
#define SD_CLOCK_BURST_SIZE_SETUP 0x10A
#define SDCLK_SEL 0x100
#define SDCLK_DELAY_SHIFT 9
......@@ -163,18 +167,11 @@ static int pxav3_set_uhs_signaling(struct sdhci_host *host, unsigned int uhs)
return 0;
}
static u32 pxav3_get_max_clock(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
return clk_get_rate(pltfm_host->clk);
}
static struct sdhci_ops pxav3_sdhci_ops = {
.platform_reset_exit = pxav3_set_private_registers,
.set_uhs_signaling = pxav3_set_uhs_signaling,
.platform_send_init_74_clocks = pxav3_gen_init_74_clocks,
.get_max_clock = pxav3_get_max_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
};
#ifdef CONFIG_OF
......@@ -303,20 +300,37 @@ static int sdhci_pxav3_probe(struct platform_device *pdev)
sdhci_get_of_property(pdev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
pm_runtime_set_autosuspend_delay(&pdev->dev, PXAV3_RPM_DELAY_MS);
pm_runtime_use_autosuspend(&pdev->dev);
pm_suspend_ignore_children(&pdev->dev, 1);
pm_runtime_get_noresume(&pdev->dev);
ret = sdhci_add_host(host);
if (ret) {
dev_err(&pdev->dev, "failed to add host\n");
pm_runtime_forbid(&pdev->dev);
pm_runtime_disable(&pdev->dev);
goto err_add_host;
}
platform_set_drvdata(pdev, host);
if (pdata->pm_caps & MMC_PM_KEEP_POWER) {
device_init_wakeup(&pdev->dev, 1);
host->mmc->pm_flags |= MMC_PM_WAKE_SDIO_IRQ;
} else {
device_init_wakeup(&pdev->dev, 0);
}
pm_runtime_put_autosuspend(&pdev->dev);
return 0;
err_add_host:
clk_disable_unprepare(clk);
clk_put(clk);
mmc_gpio_free_cd(host->mmc);
err_cd_req:
err_clk_get:
sdhci_pltfm_free(pdev);
......@@ -329,16 +343,14 @@ static int sdhci_pxav3_remove(struct platform_device *pdev)
struct sdhci_host *host = platform_get_drvdata(pdev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_pxa *pxa = pltfm_host->priv;
struct sdhci_pxa_platdata *pdata = pdev->dev.platform_data;
pm_runtime_get_sync(&pdev->dev);
sdhci_remove_host(host, 1);
pm_runtime_disable(&pdev->dev);
clk_disable_unprepare(pltfm_host->clk);
clk_put(pltfm_host->clk);
if (gpio_is_valid(pdata->ext_cd_gpio))
mmc_gpio_free_cd(host->mmc);
sdhci_pltfm_free(pdev);
kfree(pxa);
......@@ -347,6 +359,83 @@ static int sdhci_pxav3_remove(struct platform_device *pdev)
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int sdhci_pxav3_suspend(struct device *dev)
{
int ret;
struct sdhci_host *host = dev_get_drvdata(dev);
pm_runtime_get_sync(dev);
ret = sdhci_suspend_host(host);
pm_runtime_mark_last_busy(dev);
pm_runtime_put_autosuspend(dev);
return ret;
}
static int sdhci_pxav3_resume(struct device *dev)
{
int ret;
struct sdhci_host *host = dev_get_drvdata(dev);
pm_runtime_get_sync(dev);
ret = sdhci_resume_host(host);
pm_runtime_mark_last_busy(dev);
pm_runtime_put_autosuspend(dev);
return ret;
}
#endif
#ifdef CONFIG_PM_RUNTIME
static int sdhci_pxav3_runtime_suspend(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
unsigned long flags;
if (pltfm_host->clk) {
spin_lock_irqsave(&host->lock, flags);
host->runtime_suspended = true;
spin_unlock_irqrestore(&host->lock, flags);
clk_disable_unprepare(pltfm_host->clk);
}
return 0;
}
static int sdhci_pxav3_runtime_resume(struct device *dev)
{
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
unsigned long flags;
if (pltfm_host->clk) {
clk_prepare_enable(pltfm_host->clk);
spin_lock_irqsave(&host->lock, flags);
host->runtime_suspended = false;
spin_unlock_irqrestore(&host->lock, flags);
}
return 0;
}
#endif
#ifdef CONFIG_PM
static const struct dev_pm_ops sdhci_pxav3_pmops = {
SET_SYSTEM_SLEEP_PM_OPS(sdhci_pxav3_suspend, sdhci_pxav3_resume)
SET_RUNTIME_PM_OPS(sdhci_pxav3_runtime_suspend,
sdhci_pxav3_runtime_resume, NULL)
};
#define SDHCI_PXAV3_PMOPS (&sdhci_pxav3_pmops)
#else
#define SDHCI_PXAV3_PMOPS NULL
#endif
static struct platform_driver sdhci_pxav3_driver = {
.driver = {
.name = "sdhci-pxav3",
......@@ -354,7 +443,7 @@ static struct platform_driver sdhci_pxav3_driver = {
.of_match_table = sdhci_pxav3_of_match,
#endif
.owner = THIS_MODULE,
.pm = SDHCI_PLTFM_PMOPS,
.pm = SDHCI_PXAV3_PMOPS,
},
.probe = sdhci_pxav3_probe,
.remove = sdhci_pxav3_remove,
......
......@@ -332,14 +332,14 @@ static void sdhci_cmu_set_clock(struct sdhci_host *host, unsigned int clock)
}
/**
* sdhci_s3c_platform_8bit_width - support 8bit buswidth
* sdhci_s3c_platform_bus_width - support 8bit buswidth
* @host: The SDHCI host being queried
* @width: MMC_BUS_WIDTH_ macro for the bus width being requested
*
* We have 8-bit width support, but this is not a v3 controller.
* So we add platform_8bit_width() and support 8bit width.
* So we add platform_bus_width() and support 8bit width.
*/
static int sdhci_s3c_platform_8bit_width(struct sdhci_host *host, int width)
static int sdhci_s3c_platform_bus_width(struct sdhci_host *host, int width)
{
u8 ctrl;
......@@ -369,7 +369,7 @@ static struct sdhci_ops sdhci_s3c_ops = {
.get_max_clock = sdhci_s3c_get_max_clk,
.set_clock = sdhci_s3c_set_clock,
.get_min_clock = sdhci_s3c_get_min_clock,
.platform_8bit_width = sdhci_s3c_platform_8bit_width,
.platform_bus_width = sdhci_s3c_platform_bus_width,
};
static void sdhci_s3c_notify_change(struct platform_device *dev, int state)
......
......@@ -27,8 +27,6 @@
#include <asm/gpio.h>
#include <linux/platform_data/mmc-sdhci-tegra.h>
#include "sdhci-pltfm.h"
/* Tegra SDHOST controller vendor register definitions */
......@@ -45,8 +43,11 @@ struct sdhci_tegra_soc_data {
};
struct sdhci_tegra {
const struct tegra_sdhci_platform_data *plat;
const struct sdhci_tegra_soc_data *soc_data;
int cd_gpio;
int wp_gpio;
int power_gpio;
int is_8bit;
};
static u32 tegra_sdhci_readl(struct sdhci_host *host, int reg)
......@@ -108,12 +109,11 @@ static unsigned int tegra_sdhci_get_ro(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_tegra *tegra_host = pltfm_host->priv;
const struct tegra_sdhci_platform_data *plat = tegra_host->plat;
if (!gpio_is_valid(plat->wp_gpio))
if (!gpio_is_valid(tegra_host->wp_gpio))
return -1;
return gpio_get_value(plat->wp_gpio);
return gpio_get_value(tegra_host->wp_gpio);
}
static irqreturn_t carddetect_irq(int irq, void *data)
......@@ -143,15 +143,14 @@ static void tegra_sdhci_reset_exit(struct sdhci_host *host, u8 mask)
}
}
static int tegra_sdhci_8bit(struct sdhci_host *host, int bus_width)
static int tegra_sdhci_buswidth(struct sdhci_host *host, int bus_width)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_tegra *tegra_host = pltfm_host->priv;
const struct tegra_sdhci_platform_data *plat = tegra_host->plat;
u32 ctrl;
ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
if (plat->is_8bit && bus_width == MMC_BUS_WIDTH_8) {
if (tegra_host->is_8bit && bus_width == MMC_BUS_WIDTH_8) {
ctrl &= ~SDHCI_CTRL_4BITBUS;
ctrl |= SDHCI_CTRL_8BITBUS;
} else {
......@@ -170,7 +169,7 @@ static struct sdhci_ops tegra_sdhci_ops = {
.read_l = tegra_sdhci_readl,
.read_w = tegra_sdhci_readw,
.write_l = tegra_sdhci_writel,
.platform_8bit_width = tegra_sdhci_8bit,
.platform_bus_width = tegra_sdhci_buswidth,
.platform_reset_exit = tegra_sdhci_reset_exit,
};
......@@ -217,31 +216,19 @@ static const struct of_device_id sdhci_tegra_dt_match[] = {
};
MODULE_DEVICE_TABLE(of, sdhci_dt_ids);
static struct tegra_sdhci_platform_data *sdhci_tegra_dt_parse_pdata(
struct platform_device *pdev)
static void sdhci_tegra_parse_dt(struct device *dev,
struct sdhci_tegra *tegra_host)
{
struct tegra_sdhci_platform_data *plat;
struct device_node *np = pdev->dev.of_node;
struct device_node *np = dev->of_node;
u32 bus_width;
if (!np)
return NULL;
plat = devm_kzalloc(&pdev->dev, sizeof(*plat), GFP_KERNEL);
if (!plat) {
dev_err(&pdev->dev, "Can't allocate platform data\n");
return NULL;
}
plat->cd_gpio = of_get_named_gpio(np, "cd-gpios", 0);
plat->wp_gpio = of_get_named_gpio(np, "wp-gpios", 0);
plat->power_gpio = of_get_named_gpio(np, "power-gpios", 0);
tegra_host->cd_gpio = of_get_named_gpio(np, "cd-gpios", 0);
tegra_host->wp_gpio = of_get_named_gpio(np, "wp-gpios", 0);
tegra_host->power_gpio = of_get_named_gpio(np, "power-gpios", 0);
if (of_property_read_u32(np, "bus-width", &bus_width) == 0 &&
bus_width == 8)
plat->is_8bit = 1;
return plat;
tegra_host->is_8bit = 1;
}
static int sdhci_tegra_probe(struct platform_device *pdev)
......@@ -250,7 +237,6 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
const struct sdhci_tegra_soc_data *soc_data;
struct sdhci_host *host;
struct sdhci_pltfm_host *pltfm_host;
struct tegra_sdhci_platform_data *plat;
struct sdhci_tegra *tegra_host;
struct clk *clk;
int rc;
......@@ -263,52 +249,40 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
host = sdhci_pltfm_init(pdev, soc_data->pdata);
if (IS_ERR(host))
return PTR_ERR(host);
pltfm_host = sdhci_priv(host);
plat = pdev->dev.platform_data;
if (plat == NULL)
plat = sdhci_tegra_dt_parse_pdata(pdev);
if (plat == NULL) {
dev_err(mmc_dev(host->mmc), "missing platform data\n");
rc = -ENXIO;
goto err_no_plat;
}
tegra_host = devm_kzalloc(&pdev->dev, sizeof(*tegra_host), GFP_KERNEL);
if (!tegra_host) {
dev_err(mmc_dev(host->mmc), "failed to allocate tegra_host\n");
rc = -ENOMEM;
goto err_no_plat;
goto err_alloc_tegra_host;
}
tegra_host->plat = plat;
tegra_host->soc_data = soc_data;
pltfm_host->priv = tegra_host;
if (gpio_is_valid(plat->power_gpio)) {
rc = gpio_request(plat->power_gpio, "sdhci_power");
sdhci_tegra_parse_dt(&pdev->dev, tegra_host);
if (gpio_is_valid(tegra_host->power_gpio)) {
rc = gpio_request(tegra_host->power_gpio, "sdhci_power");
if (rc) {
dev_err(mmc_dev(host->mmc),
"failed to allocate power gpio\n");
goto err_power_req;
}
gpio_direction_output(plat->power_gpio, 1);
gpio_direction_output(tegra_host->power_gpio, 1);
}
if (gpio_is_valid(plat->cd_gpio)) {
rc = gpio_request(plat->cd_gpio, "sdhci_cd");
if (gpio_is_valid(tegra_host->cd_gpio)) {
rc = gpio_request(tegra_host->cd_gpio, "sdhci_cd");
if (rc) {
dev_err(mmc_dev(host->mmc),
"failed to allocate cd gpio\n");
goto err_cd_req;
}
gpio_direction_input(plat->cd_gpio);
gpio_direction_input(tegra_host->cd_gpio);
rc = request_irq(gpio_to_irq(plat->cd_gpio), carddetect_irq,
rc = request_irq(gpio_to_irq(tegra_host->cd_gpio),
carddetect_irq,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING,
mmc_hostname(host->mmc), host);
......@@ -319,14 +293,14 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
}
if (gpio_is_valid(plat->wp_gpio)) {
rc = gpio_request(plat->wp_gpio, "sdhci_wp");
if (gpio_is_valid(tegra_host->wp_gpio)) {
rc = gpio_request(tegra_host->wp_gpio, "sdhci_wp");
if (rc) {
dev_err(mmc_dev(host->mmc),
"failed to allocate wp gpio\n");
goto err_wp_req;
}
gpio_direction_input(plat->wp_gpio);
gpio_direction_input(tegra_host->wp_gpio);
}
clk = clk_get(mmc_dev(host->mmc), NULL);
......@@ -338,9 +312,7 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
clk_prepare_enable(clk);
pltfm_host->clk = clk;
host->mmc->pm_caps = plat->pm_flags;
if (plat->is_8bit)
if (tegra_host->is_8bit)
host->mmc->caps |= MMC_CAP_8_BIT_DATA;
rc = sdhci_add_host(host);
......@@ -353,19 +325,19 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
clk_disable_unprepare(pltfm_host->clk);
clk_put(pltfm_host->clk);
err_clk_get:
if (gpio_is_valid(plat->wp_gpio))
gpio_free(plat->wp_gpio);
if (gpio_is_valid(tegra_host->wp_gpio))
gpio_free(tegra_host->wp_gpio);
err_wp_req:
if (gpio_is_valid(plat->cd_gpio))
free_irq(gpio_to_irq(plat->cd_gpio), host);
if (gpio_is_valid(tegra_host->cd_gpio))
free_irq(gpio_to_irq(tegra_host->cd_gpio), host);
err_cd_irq_req:
if (gpio_is_valid(plat->cd_gpio))
gpio_free(plat->cd_gpio);
if (gpio_is_valid(tegra_host->cd_gpio))
gpio_free(tegra_host->cd_gpio);
err_cd_req:
if (gpio_is_valid(plat->power_gpio))
gpio_free(plat->power_gpio);
if (gpio_is_valid(tegra_host->power_gpio))
gpio_free(tegra_host->power_gpio);
err_power_req:
err_no_plat:
err_alloc_tegra_host:
sdhci_pltfm_free(pdev);
return rc;
}
......@@ -375,21 +347,20 @@ static int sdhci_tegra_remove(struct platform_device *pdev)
struct sdhci_host *host = platform_get_drvdata(pdev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_tegra *tegra_host = pltfm_host->priv;
const struct tegra_sdhci_platform_data *plat = tegra_host->plat;
int dead = (readl(host->ioaddr + SDHCI_INT_STATUS) == 0xffffffff);
sdhci_remove_host(host, dead);
if (gpio_is_valid(plat->wp_gpio))
gpio_free(plat->wp_gpio);
if (gpio_is_valid(tegra_host->wp_gpio))
gpio_free(tegra_host->wp_gpio);
if (gpio_is_valid(plat->cd_gpio)) {
free_irq(gpio_to_irq(plat->cd_gpio), host);
gpio_free(plat->cd_gpio);
if (gpio_is_valid(tegra_host->cd_gpio)) {
free_irq(gpio_to_irq(tegra_host->cd_gpio), host);
gpio_free(tegra_host->cd_gpio);
}
if (gpio_is_valid(plat->power_gpio))
gpio_free(plat->power_gpio);
if (gpio_is_valid(tegra_host->power_gpio))
gpio_free(tegra_host->power_gpio);
clk_disable_unprepare(pltfm_host->clk);
clk_put(pltfm_host->clk);
......
......@@ -53,6 +53,7 @@ static void sdhci_send_command(struct sdhci_host *, struct mmc_command *);
static void sdhci_finish_command(struct sdhci_host *);
static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode);
static void sdhci_tuning_timer(unsigned long data);
static void sdhci_enable_preset_value(struct sdhci_host *host, bool enable);
#ifdef CONFIG_PM_RUNTIME
static int sdhci_runtime_pm_get(struct sdhci_host *host);
......@@ -1082,6 +1083,37 @@ static void sdhci_finish_command(struct sdhci_host *host)
}
}
static u16 sdhci_get_preset_value(struct sdhci_host *host)
{
u16 ctrl, preset = 0;
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
switch (ctrl & SDHCI_CTRL_UHS_MASK) {
case SDHCI_CTRL_UHS_SDR12:
preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR12);
break;
case SDHCI_CTRL_UHS_SDR25:
preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR25);
break;
case SDHCI_CTRL_UHS_SDR50:
preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR50);
break;
case SDHCI_CTRL_UHS_SDR104:
preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR104);
break;
case SDHCI_CTRL_UHS_DDR50:
preset = sdhci_readw(host, SDHCI_PRESET_FOR_DDR50);
break;
default:
pr_warn("%s: Invalid UHS-I mode selected\n",
mmc_hostname(host->mmc));
preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR12);
break;
}
return preset;
}
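For reference, a minimal decode sketch (illustration only, not part of the patch) showing how the value returned above is interpreted with the SDHCI_PRESET_* masks added to sdhci.h in this series; the local variable names are made up:

	u16 preset = sdhci_get_preset_value(host);
	unsigned int freq_sel = (preset & SDHCI_PRESET_SDCLK_FREQ_MASK)
				>> SDHCI_PRESET_SDCLK_FREQ_SHIFT;
	unsigned int drv_type = (preset & SDHCI_PRESET_DRV_MASK)
				>> SDHCI_PRESET_DRV_SHIFT;
	bool clkgen_sel = !!(preset & SDHCI_PRESET_CLKGEN_SEL_MASK);

sdhci_set_clock() below uses exactly this pattern to pick the divisor, and sdhci_do_set_ios() extracts the driver-strength field the same way.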
static void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
{
int div = 0; /* Initialized for compiler warning */
......@@ -1106,35 +1138,43 @@ static void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
goto out;
if (host->version >= SDHCI_SPEC_300) {
if (sdhci_readw(host, SDHCI_HOST_CONTROL2) &
SDHCI_CTRL_PRESET_VAL_ENABLE) {
u16 pre_val;
clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
pre_val = sdhci_get_preset_value(host);
div = (pre_val & SDHCI_PRESET_SDCLK_FREQ_MASK)
>> SDHCI_PRESET_SDCLK_FREQ_SHIFT;
if (host->clk_mul &&
(pre_val & SDHCI_PRESET_CLKGEN_SEL_MASK)) {
clk = SDHCI_PROG_CLOCK_MODE;
real_div = div + 1;
clk_mul = host->clk_mul;
} else {
real_div = max_t(int, 1, div << 1);
}
goto clock_set;
}
/*
* Check if the Host Controller supports Programmable Clock
* Mode.
*/
if (host->clk_mul) {
u16 ctrl;
for (div = 1; div <= 1024; div++) {
if ((host->max_clk * host->clk_mul / div)
<= clock)
break;
}
/*
* We need to figure out whether the Host Driver needs
* to select Programmable Clock Mode, or the value can
* be set automatically by the Host Controller based on
* the Preset Value registers.
* Set Programmable Clock Mode in the Clock
* Control register.
*/
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (!(ctrl & SDHCI_CTRL_PRESET_VAL_ENABLE)) {
for (div = 1; div <= 1024; div++) {
if (((host->max_clk * host->clk_mul) /
div) <= clock)
break;
}
/*
* Set Programmable Clock Mode in the Clock
* Control register.
*/
clk = SDHCI_PROG_CLOCK_MODE;
real_div = div;
clk_mul = host->clk_mul;
div--;
}
clk = SDHCI_PROG_CLOCK_MODE;
real_div = div;
clk_mul = host->clk_mul;
div--;
} else {
/* Version 3.00 divisors must be a multiple of 2. */
if (host->max_clk <= clock)
......@@ -1159,6 +1199,7 @@ static void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
div >>= 1;
}
clock_set:
if (real_div)
host->mmc->actual_clock = (host->max_clk * clk_mul) / real_div;
......@@ -1189,6 +1230,15 @@ static void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
host->clock = clock;
}
static inline void sdhci_update_clock(struct sdhci_host *host)
{
unsigned int clock;
clock = host->clock;
host->clock = 0;
sdhci_set_clock(host, clock);
}
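The new sdhci_update_clock() helper factors out the force-reprogram idiom used in several places below: zeroing host->clock first guarantees that sdhci_set_clock() does not return early. The open-coded sequence it replaces looks like:

	clock = host->clock;
	host->clock = 0;
	sdhci_set_clock(host, clock);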
static int sdhci_set_power(struct sdhci_host *host, unsigned short power)
{
u8 pwr = 0;
......@@ -1258,7 +1308,7 @@ static int sdhci_set_power(struct sdhci_host *host, unsigned short power)
static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
struct sdhci_host *host;
bool present;
int present;
unsigned long flags;
u32 tuning_opcode;
......@@ -1287,18 +1337,21 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
host->mrq = mrq;
/* If polling, assume that the card is always present. */
if (host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION)
present = true;
else
present = sdhci_readl(host, SDHCI_PRESENT_STATE) &
SDHCI_CARD_PRESENT;
/* If we're using a cd-gpio, testing the presence bit might fail. */
if (!present) {
int ret = mmc_gpio_get_cd(host->mmc);
if (ret > 0)
present = true;
/*
* Firstly check card presence from cd-gpio. The return could
* be one of the following possibilities:
* negative: cd-gpio is not available
* zero: cd-gpio is used, and card is removed
* one: cd-gpio is used, and card is present
*/
present = mmc_gpio_get_cd(host->mmc);
if (present < 0) {
/* If polling, assume that the card is always present. */
if (host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION)
present = 1;
else
present = sdhci_readl(host, SDHCI_PRESENT_STATE) &
SDHCI_CARD_PRESENT;
}
if (!present || host->flags & SDHCI_DEVICE_DEAD) {
......@@ -1364,6 +1417,10 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
sdhci_reinit(host);
}
if (host->version >= SDHCI_SPEC_300 &&
(ios->power_mode == MMC_POWER_UP))
sdhci_enable_preset_value(host, false);
sdhci_set_clock(host, ios->clock);
if (ios->power_mode == MMC_POWER_OFF)
......@@ -1383,11 +1440,11 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
/*
* If your platform has 8-bit width support but is not a v3 controller,
* or if it requires special setup code, you should implement that in
* platform_8bit_width().
* platform_bus_width().
*/
if (host->ops->platform_8bit_width)
host->ops->platform_8bit_width(host, ios->bus_width);
else {
if (host->ops->platform_bus_width) {
host->ops->platform_bus_width(host, ios->bus_width);
} else {
ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
if (ios->bus_width == MMC_BUS_WIDTH_8) {
ctrl &= ~SDHCI_CTRL_4BITBUS;
......@@ -1415,7 +1472,6 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
if (host->version >= SDHCI_SPEC_300) {
u16 clk, ctrl_2;
unsigned int clock;
/* In case of UHS-I modes, set High Speed Enable */
if ((ios->timing == MMC_TIMING_MMC_HS200) ||
......@@ -1455,9 +1511,7 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
/* Re-enable SD Clock */
clock = host->clock;
host->clock = 0;
sdhci_set_clock(host, clock);
sdhci_update_clock(host);
}
......@@ -1487,10 +1541,22 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
sdhci_writew(host, ctrl_2, SDHCI_HOST_CONTROL2);
}
if (!(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN) &&
((ios->timing == MMC_TIMING_UHS_SDR12) ||
(ios->timing == MMC_TIMING_UHS_SDR25) ||
(ios->timing == MMC_TIMING_UHS_SDR50) ||
(ios->timing == MMC_TIMING_UHS_SDR104) ||
(ios->timing == MMC_TIMING_UHS_DDR50))) {
u16 preset;
sdhci_enable_preset_value(host, true);
preset = sdhci_get_preset_value(host);
ios->drv_type = (preset & SDHCI_PRESET_DRV_MASK)
>> SDHCI_PRESET_DRV_SHIFT;
}
/* Re-enable SD Clock */
clock = host->clock;
host->clock = 0;
sdhci_set_clock(host, clock);
sdhci_update_clock(host);
} else
sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
......@@ -1608,141 +1674,91 @@ static void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable)
spin_unlock_irqrestore(&host->lock, flags);
}
static int sdhci_do_3_3v_signal_voltage_switch(struct sdhci_host *host,
u16 ctrl)
static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
struct mmc_ios *ios)
{
u16 ctrl;
int ret;
/* Set 1.8V Signal Enable in the Host Control2 register to 0 */
ctrl &= ~SDHCI_CTRL_VDD_180;
sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
if (host->vqmmc) {
ret = regulator_set_voltage(host->vqmmc, 2700000, 3600000);
if (ret) {
pr_warning("%s: Switching to 3.3V signalling voltage "
" failed\n", mmc_hostname(host->mmc));
return -EIO;
}
}
/* Wait for 5ms */
usleep_range(5000, 5500);
/*
* Signal Voltage Switching is only applicable for Host Controllers
* v3.00 and above.
*/
if (host->version < SDHCI_SPEC_300)
return 0;
/* 3.3V regulator output should be stable within 5 ms */
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (!(ctrl & SDHCI_CTRL_VDD_180))
return 0;
pr_warning("%s: 3.3V regulator output did not became stable\n",
mmc_hostname(host->mmc));
switch (ios->signal_voltage) {
case MMC_SIGNAL_VOLTAGE_330:
/* Set 1.8V Signal Enable in the Host Control2 register to 0 */
ctrl &= ~SDHCI_CTRL_VDD_180;
sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
return -EIO;
}
if (host->vqmmc) {
ret = regulator_set_voltage(host->vqmmc, 2700000, 3600000);
if (ret) {
pr_warning("%s: Switching to 3.3V signalling voltage "
" failed\n", mmc_hostname(host->mmc));
return -EIO;
}
}
/* Wait for 5ms */
usleep_range(5000, 5500);
static int sdhci_do_1_8v_signal_voltage_switch(struct sdhci_host *host,
u16 ctrl)
{
u8 pwr;
u16 clk;
u32 present_state;
int ret;
/* 3.3V regulator output should be stable within 5 ms */
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (!(ctrl & SDHCI_CTRL_VDD_180))
return 0;
/* Stop SDCLK */
clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
clk &= ~SDHCI_CLOCK_CARD_EN;
sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
pr_warning("%s: 3.3V regulator output did not became stable\n",
mmc_hostname(host->mmc));
return -EAGAIN;
case MMC_SIGNAL_VOLTAGE_180:
if (host->vqmmc) {
ret = regulator_set_voltage(host->vqmmc,
1700000, 1950000);
if (ret) {
pr_warning("%s: Switching to 1.8V signalling voltage "
" failed\n", mmc_hostname(host->mmc));
return -EIO;
}
}
/* Check whether DAT[3:0] is 0000 */
present_state = sdhci_readl(host, SDHCI_PRESENT_STATE);
if (!((present_state & SDHCI_DATA_LVL_MASK) >>
SDHCI_DATA_LVL_SHIFT)) {
/*
* Enable 1.8V Signal Enable in the Host Control2
* register
*/
if (host->vqmmc)
ret = regulator_set_voltage(host->vqmmc,
1700000, 1950000);
else
ret = 0;
ctrl |= SDHCI_CTRL_VDD_180;
sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
if (!ret) {
ctrl |= SDHCI_CTRL_VDD_180;
sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
/* Wait for 5ms */
usleep_range(5000, 5500);
/* Wait for 5ms */
usleep_range(5000, 5500);
/* 1.8V regulator output should be stable within 5 ms */
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (ctrl & SDHCI_CTRL_VDD_180)
return 0;
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (ctrl & SDHCI_CTRL_VDD_180) {
/* Provide SDCLK again and wait for 1ms */
clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
clk |= SDHCI_CLOCK_CARD_EN;
sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
usleep_range(1000, 1500);
pr_warning("%s: 1.8V regulator output did not became stable\n",
mmc_hostname(host->mmc));
/*
* If DAT[3:0] level is 1111b, then the card
* was successfully switched to 1.8V signaling.
*/
present_state = sdhci_readl(host,
SDHCI_PRESENT_STATE);
if ((present_state & SDHCI_DATA_LVL_MASK) ==
SDHCI_DATA_LVL_MASK)
return 0;
return -EAGAIN;
case MMC_SIGNAL_VOLTAGE_120:
if (host->vqmmc) {
ret = regulator_set_voltage(host->vqmmc, 1100000, 1300000);
if (ret) {
pr_warning("%s: Switching to 1.2V signalling voltage "
" failed\n", mmc_hostname(host->mmc));
return -EIO;
}
}
}
/*
* If we are here, that means the switch to 1.8V signaling
* failed. We power cycle the card, and retry initialization
* sequence by setting S18R to 0.
*/
pwr = sdhci_readb(host, SDHCI_POWER_CONTROL);
pwr &= ~SDHCI_POWER_ON;
sdhci_writeb(host, pwr, SDHCI_POWER_CONTROL);
if (host->vmmc)
regulator_disable(host->vmmc);
/* Wait for 1ms as per the spec */
usleep_range(1000, 1500);
pwr |= SDHCI_POWER_ON;
sdhci_writeb(host, pwr, SDHCI_POWER_CONTROL);
if (host->vmmc)
regulator_enable(host->vmmc);
pr_warning("%s: Switching to 1.8V signalling voltage failed, "
"retrying with S18R set to 0\n", mmc_hostname(host->mmc));
return -EAGAIN;
}
static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
struct mmc_ios *ios)
{
u16 ctrl;
/*
* Signal Voltage Switching is only applicable for Host Controllers
* v3.00 and above.
*/
if (host->version < SDHCI_SPEC_300)
return 0;
/*
* We first check whether the request is to set signalling voltage
* to 3.3V. If so, we change the voltage to 3.3V and return quickly.
*/
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330)
return sdhci_do_3_3v_signal_voltage_switch(host, ctrl);
else if (!(ctrl & SDHCI_CTRL_VDD_180) &&
(ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180))
return sdhci_do_1_8v_signal_voltage_switch(host, ctrl);
else
default:
/* No signal voltage switch required */
return 0;
}
}
static int sdhci_start_signal_voltage_switch(struct mmc_host *mmc,
......@@ -1759,6 +1775,19 @@ static int sdhci_start_signal_voltage_switch(struct mmc_host *mmc,
return err;
}
static int sdhci_card_busy(struct mmc_host *mmc)
{
struct sdhci_host *host = mmc_priv(mmc);
u32 present_state;
sdhci_runtime_pm_get(host);
/* Check whether DAT[3:0] is 0000 */
present_state = sdhci_readl(host, SDHCI_PRESENT_STATE);
sdhci_runtime_pm_put(host);
return !(present_state & SDHCI_DATA_LVL_MASK);
}
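A minimal usage sketch (hypothetical caller, not from this patch): with the new .card_busy host operation the MMC core can poll for the card releasing DAT[3:0], e.g. while completing a signal-voltage switch:

	while (mmc->ops->card_busy && mmc->ops->card_busy(mmc))
		usleep_range(1000, 2000);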
static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
struct sdhci_host *host;
......@@ -1955,17 +1984,15 @@ static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
return err;
}
static void sdhci_do_enable_preset_value(struct sdhci_host *host, bool enable)
static void sdhci_enable_preset_value(struct sdhci_host *host, bool enable)
{
u16 ctrl;
unsigned long flags;
/* Host Controller v3.00 defines preset value registers */
if (host->version < SDHCI_SPEC_300)
return;
spin_lock_irqsave(&host->lock, flags);
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
/*
......@@ -1981,17 +2008,6 @@ static void sdhci_do_enable_preset_value(struct sdhci_host *host, bool enable)
sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
host->flags &= ~SDHCI_PV_ENABLED;
}
spin_unlock_irqrestore(&host->lock, flags);
}
static void sdhci_enable_preset_value(struct mmc_host *mmc, bool enable)
{
struct sdhci_host *host = mmc_priv(mmc);
sdhci_runtime_pm_get(host);
sdhci_do_enable_preset_value(host, enable);
sdhci_runtime_pm_put(host);
}
static void sdhci_card_event(struct mmc_host *mmc)
......@@ -2027,8 +2043,8 @@ static const struct mmc_host_ops sdhci_ops = {
.enable_sdio_irq = sdhci_enable_sdio_irq,
.start_signal_voltage_switch = sdhci_start_signal_voltage_switch,
.execute_tuning = sdhci_execute_tuning,
.enable_preset_value = sdhci_enable_preset_value,
.card_event = sdhci_card_event,
.card_busy = sdhci_card_busy,
};
/*****************************************************************************\
......@@ -2080,14 +2096,9 @@ static void sdhci_tasklet_finish(unsigned long param)
(host->quirks & SDHCI_QUIRK_RESET_AFTER_REQUEST))) {
/* Some controllers need this kick or reset won't work here */
if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET) {
unsigned int clock;
if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
/* This is to force an update */
clock = host->clock;
host->clock = 0;
sdhci_set_clock(host, clock);
}
sdhci_update_clock(host);
/* Spec says we should do both at the same time, but Ricoh
controllers do not like that. */
......@@ -2455,6 +2466,32 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id)
\*****************************************************************************/
#ifdef CONFIG_PM
void sdhci_enable_irq_wakeups(struct sdhci_host *host)
{
u8 val;
u8 mask = SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE
| SDHCI_WAKE_ON_INT;
val = sdhci_readb(host, SDHCI_WAKE_UP_CONTROL);
val |= mask;
/* Avoid fake wake up */
if (host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION)
val &= ~(SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE);
sdhci_writeb(host, val, SDHCI_WAKE_UP_CONTROL);
}
EXPORT_SYMBOL_GPL(sdhci_enable_irq_wakeups);
void sdhci_disable_irq_wakeups(struct sdhci_host *host)
{
u8 val;
u8 mask = SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE
| SDHCI_WAKE_ON_INT;
val = sdhci_readb(host, SDHCI_WAKE_UP_CONTROL);
val &= ~mask;
sdhci_writeb(host, val, SDHCI_WAKE_UP_CONTROL);
}
EXPORT_SYMBOL_GPL(sdhci_disable_irq_wakeups);
int sdhci_suspend_host(struct sdhci_host *host)
{
......@@ -2484,8 +2521,13 @@ int sdhci_suspend_host(struct sdhci_host *host)
return ret;
}
free_irq(host->irq, host);
if (!device_may_wakeup(mmc_dev(host->mmc))) {
sdhci_mask_irqs(host, SDHCI_INT_ALL_MASK);
free_irq(host->irq, host);
} else {
sdhci_enable_irq_wakeups(host);
enable_irq_wake(host->irq);
}
return ret;
}
......@@ -2500,10 +2542,15 @@ int sdhci_resume_host(struct sdhci_host *host)
host->ops->enable_dma(host);
}
ret = request_irq(host->irq, sdhci_irq, IRQF_SHARED,
mmc_hostname(host->mmc), host);
if (ret)
return ret;
if (!device_may_wakeup(mmc_dev(host->mmc))) {
ret = request_irq(host->irq, sdhci_irq, IRQF_SHARED,
mmc_hostname(host->mmc), host);
if (ret)
return ret;
} else {
sdhci_disable_irq_wakeups(host);
disable_irq_wake(host->irq);
}
if ((host->mmc->pm_flags & MMC_PM_KEEP_POWER) &&
(host->quirks2 & SDHCI_QUIRK2_HOST_OFF_CARD_ON)) {
......@@ -2531,17 +2578,6 @@ int sdhci_resume_host(struct sdhci_host *host)
}
EXPORT_SYMBOL_GPL(sdhci_resume_host);
void sdhci_enable_irq_wakeups(struct sdhci_host *host)
{
u8 val;
val = sdhci_readb(host, SDHCI_WAKE_UP_CONTROL);
val |= SDHCI_WAKE_ON_INT;
sdhci_writeb(host, val, SDHCI_WAKE_UP_CONTROL);
}
EXPORT_SYMBOL_GPL(sdhci_enable_irq_wakeups);
#endif /* CONFIG_PM */
#ifdef CONFIG_PM_RUNTIME
......@@ -2600,8 +2636,12 @@ int sdhci_runtime_resume_host(struct sdhci_host *host)
sdhci_do_set_ios(host, &host->mmc->ios);
sdhci_do_start_signal_voltage_switch(host, &host->mmc->ios);
if (host_flags & SDHCI_PV_ENABLED)
sdhci_do_enable_preset_value(host, true);
if ((host_flags & SDHCI_PV_ENABLED) &&
!(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN)) {
spin_lock_irqsave(&host->lock, flags);
sdhci_enable_preset_value(host, true);
spin_unlock_irqrestore(&host->lock, flags);
}
/* Set the re-tuning expiration flag */
if (host->flags & SDHCI_USING_RETUNING_TIMER)
......@@ -2936,7 +2976,11 @@ int sdhci_add_host(struct sdhci_host *host)
}
#ifdef CONFIG_REGULATOR
if (host->vmmc) {
/*
* Voltage range check makes sense only if regulator reports
* any voltage value.
*/
if (host->vmmc && regulator_get_voltage(host->vmmc) > 0) {
ret = regulator_is_supported_voltage(host->vmmc, 2700000,
3600000);
if ((ret <= 0) || (!(caps[0] & SDHCI_CAN_VDD_330)))
......@@ -3139,6 +3183,7 @@ int sdhci_add_host(struct sdhci_host *host)
#ifdef SDHCI_USE_LEDS_CLASS
reset:
sdhci_reset(host, SDHCI_RESET_ALL);
sdhci_mask_irqs(host, SDHCI_INT_ALL_MASK);
free_irq(host->irq, host);
#endif
untasklet:
......@@ -3181,6 +3226,7 @@ void sdhci_remove_host(struct sdhci_host *host, int dead)
if (!dead)
sdhci_reset(host, SDHCI_RESET_ALL);
sdhci_mask_irqs(host, SDHCI_INT_ALL_MASK);
free_irq(host->irq, host);
del_timer_sync(&host->timer);
......
......@@ -229,6 +229,18 @@
/* 60-FB reserved */
#define SDHCI_PRESET_FOR_SDR12 0x66
#define SDHCI_PRESET_FOR_SDR25 0x68
#define SDHCI_PRESET_FOR_SDR50 0x6A
#define SDHCI_PRESET_FOR_SDR104 0x6C
#define SDHCI_PRESET_FOR_DDR50 0x6E
#define SDHCI_PRESET_DRV_MASK 0xC000
#define SDHCI_PRESET_DRV_SHIFT 14
#define SDHCI_PRESET_CLKGEN_SEL_MASK 0x400
#define SDHCI_PRESET_CLKGEN_SEL_SHIFT 10
#define SDHCI_PRESET_SDCLK_FREQ_MASK 0x3FF
#define SDHCI_PRESET_SDCLK_FREQ_SHIFT 0
#define SDHCI_SLOT_INT_STATUS 0xFC
#define SDHCI_HOST_VERSION 0xFE
......@@ -269,7 +281,7 @@ struct sdhci_ops {
unsigned int (*get_max_clock)(struct sdhci_host *host);
unsigned int (*get_min_clock)(struct sdhci_host *host);
unsigned int (*get_timeout_clock)(struct sdhci_host *host);
int (*platform_8bit_width)(struct sdhci_host *host,
int (*platform_bus_width)(struct sdhci_host *host,
int width);
void (*platform_send_init_74_clocks)(struct sdhci_host *host,
u8 power_mode);
......
......@@ -56,6 +56,7 @@
#include <linux/mmc/sh_mmcif.h>
#include <linux/mmc/slot-gpio.h>
#include <linux/mod_devicetable.h>
#include <linux/mutex.h>
#include <linux/pagemap.h>
#include <linux/platform_device.h>
#include <linux/pm_qos.h>
......@@ -88,6 +89,7 @@
#define CMD_SET_TBIT (1 << 7) /* 1: transmission bit "Low" */
#define CMD_SET_OPDM (1 << 6) /* 1: open/drain */
#define CMD_SET_CCSH (1 << 5)
#define CMD_SET_DARS (1 << 2) /* Dual Data Rate */
#define CMD_SET_DATW_1 ((0 << 1) | (0 << 0)) /* 1bit */
#define CMD_SET_DATW_4 ((0 << 1) | (1 << 0)) /* 4bit */
#define CMD_SET_DATW_8 ((1 << 1) | (0 << 0)) /* 8bit */
......@@ -127,6 +129,10 @@
INT_CCSTO | INT_CRCSTO | INT_WDATTO | \
INT_RDATTO | INT_RBSYTO | INT_RSPTO)
#define INT_ALL (INT_RBSYE | INT_CRSPE | INT_BUFREN | \
INT_BUFWEN | INT_CMD12DRE | INT_BUFRE | \
INT_DTRANE | INT_CMD12RBE | INT_CMD12CRE)
/* CE_INT_MASK */
#define MASK_ALL 0x00000000
#define MASK_MCCSDE (1 << 29)
......@@ -158,6 +164,11 @@
MASK_MCCSTO | MASK_MCRCSTO | MASK_MWDATTO | \
MASK_MRDATTO | MASK_MRBSYTO | MASK_MRSPTO)
#define MASK_CLEAN (INT_ERR_STS | MASK_MRBSYE | MASK_MCRSPE | \
MASK_MBUFREN | MASK_MBUFWEN | \
MASK_MCMD12DRE | MASK_MBUFRE | MASK_MDTRANE | \
MASK_MCMD12RBE | MASK_MCMD12CRE)
/* CE_HOST_STS1 */
#define STS1_CMDSEQ (1 << 31)
......@@ -195,6 +206,7 @@ enum mmcif_state {
STATE_IDLE,
STATE_REQUEST,
STATE_IOS,
STATE_TIMEOUT,
};
enum mmcif_wait_for {
......@@ -216,6 +228,7 @@ struct sh_mmcif_host {
struct clk *hclk;
unsigned int clk;
int bus_width;
unsigned char timing;
bool sd_error;
bool dying;
long timeout;
......@@ -230,6 +243,7 @@ struct sh_mmcif_host {
int sg_blkidx;
bool power;
bool card_present;
struct mutex thread_lock;
/* DMA support */
struct dma_chan *chan_rx;
......@@ -253,23 +267,14 @@ static inline void sh_mmcif_bitclr(struct sh_mmcif_host *host,
static void mmcif_dma_complete(void *arg)
{
struct sh_mmcif_host *host = arg;
struct mmc_data *data = host->mrq->data;
struct mmc_request *mrq = host->mrq;
dev_dbg(&host->pd->dev, "Command completed\n");
if (WARN(!data, "%s: NULL data in DMA completion!\n",
if (WARN(!mrq || !mrq->data, "%s: NULL data in DMA completion!\n",
dev_name(&host->pd->dev)))
return;
if (data->flags & MMC_DATA_READ)
dma_unmap_sg(host->chan_rx->device->dev,
data->sg, data->sg_len,
DMA_FROM_DEVICE);
else
dma_unmap_sg(host->chan_tx->device->dev,
data->sg, data->sg_len,
DMA_TO_DEVICE);
complete(&host->dma_complete);
}
......@@ -423,8 +428,6 @@ static void sh_mmcif_request_dma(struct sh_mmcif_host *host,
if (ret < 0)
goto ecfgrx;
init_completion(&host->dma_complete);
return;
ecfgrx:
......@@ -520,13 +523,16 @@ static int sh_mmcif_error_manage(struct sh_mmcif_host *host)
}
if (state2 & STS2_CRC_ERR) {
dev_dbg(&host->pd->dev, ": CRC error\n");
dev_err(&host->pd->dev, " CRC error: state %u, wait %u\n",
host->state, host->wait_for);
ret = -EIO;
} else if (state2 & STS2_TIMEOUT_ERR) {
dev_dbg(&host->pd->dev, ": Timeout\n");
dev_err(&host->pd->dev, " Timeout: state %u, wait %u\n",
host->state, host->wait_for);
ret = -ETIMEDOUT;
} else {
dev_dbg(&host->pd->dev, ": End/Index error\n");
dev_dbg(&host->pd->dev, " End/Index error: state %u, wait %u\n",
host->state, host->wait_for);
ret = -EIO;
}
return ret;
......@@ -549,10 +555,7 @@ static bool sh_mmcif_next_block(struct sh_mmcif_host *host, u32 *p)
host->pio_ptr = p;
}
if (host->sg_idx == data->sg_len)
return false;
return true;
return host->sg_idx != data->sg_len;
}
static void sh_mmcif_single_read(struct sh_mmcif_host *host,
......@@ -562,7 +565,6 @@ static void sh_mmcif_single_read(struct sh_mmcif_host *host,
BLOCK_SIZE_MASK) + 3;
host->wait_for = MMCIF_WAIT_FOR_READ;
schedule_delayed_work(&host->timeout_work, host->timeout);
/* buf read enable */
sh_mmcif_bitset(host, MMCIF_CE_INT_MASK, MASK_MBUFREN);
......@@ -576,6 +578,7 @@ static bool sh_mmcif_read_block(struct sh_mmcif_host *host)
if (host->sd_error) {
data->error = sh_mmcif_error_manage(host);
dev_dbg(&host->pd->dev, "%s(): %d\n", __func__, data->error);
return false;
}
......@@ -604,7 +607,7 @@ static void sh_mmcif_multi_read(struct sh_mmcif_host *host,
host->sg_idx = 0;
host->sg_blkidx = 0;
host->pio_ptr = sg_virt(data->sg);
schedule_delayed_work(&host->timeout_work, host->timeout);
sh_mmcif_bitset(host, MMCIF_CE_INT_MASK, MASK_MBUFREN);
}
......@@ -616,6 +619,7 @@ static bool sh_mmcif_mread_block(struct sh_mmcif_host *host)
if (host->sd_error) {
data->error = sh_mmcif_error_manage(host);
dev_dbg(&host->pd->dev, "%s(): %d\n", __func__, data->error);
return false;
}
......@@ -627,7 +631,6 @@ static bool sh_mmcif_mread_block(struct sh_mmcif_host *host)
if (!sh_mmcif_next_block(host, p))
return false;
schedule_delayed_work(&host->timeout_work, host->timeout);
sh_mmcif_bitset(host, MMCIF_CE_INT_MASK, MASK_MBUFREN);
return true;
......@@ -640,7 +643,6 @@ static void sh_mmcif_single_write(struct sh_mmcif_host *host,
BLOCK_SIZE_MASK) + 3;
host->wait_for = MMCIF_WAIT_FOR_WRITE;
schedule_delayed_work(&host->timeout_work, host->timeout);
/* buf write enable */
sh_mmcif_bitset(host, MMCIF_CE_INT_MASK, MASK_MBUFWEN);
......@@ -654,6 +656,7 @@ static bool sh_mmcif_write_block(struct sh_mmcif_host *host)
if (host->sd_error) {
data->error = sh_mmcif_error_manage(host);
dev_dbg(&host->pd->dev, "%s(): %d\n", __func__, data->error);
return false;
}
......@@ -682,7 +685,7 @@ static void sh_mmcif_multi_write(struct sh_mmcif_host *host,
host->sg_idx = 0;
host->sg_blkidx = 0;
host->pio_ptr = sg_virt(data->sg);
schedule_delayed_work(&host->timeout_work, host->timeout);
sh_mmcif_bitset(host, MMCIF_CE_INT_MASK, MASK_MBUFWEN);
}
......@@ -694,6 +697,7 @@ static bool sh_mmcif_mwrite_block(struct sh_mmcif_host *host)
if (host->sd_error) {
data->error = sh_mmcif_error_manage(host);
dev_dbg(&host->pd->dev, "%s(): %d\n", __func__, data->error);
return false;
}
......@@ -705,7 +709,6 @@ static bool sh_mmcif_mwrite_block(struct sh_mmcif_host *host)
if (!sh_mmcif_next_block(host, p))
return false;
schedule_delayed_work(&host->timeout_work, host->timeout);
sh_mmcif_bitset(host, MMCIF_CE_INT_MASK, MASK_MBUFWEN);
return true;
......@@ -756,6 +759,7 @@ static u32 sh_mmcif_set_cmd(struct sh_mmcif_host *host,
}
switch (opc) {
/* RBSY */
case MMC_SLEEP_AWAKE:
case MMC_SWITCH:
case MMC_STOP_TRANSMISSION:
case MMC_SET_WRITE_PROT:
......@@ -781,6 +785,17 @@ static u32 sh_mmcif_set_cmd(struct sh_mmcif_host *host,
dev_err(&host->pd->dev, "Unsupported bus width.\n");
break;
}
switch (host->timing) {
case MMC_TIMING_UHS_DDR50:
/*
* The MMC core will only set this timing if the host
* advertises the MMC_CAP_UHS_DDR50 capability. MMCIF
* implementations with this capability, e.g. sh73a0,
* will have to set it in their platform data.
*/
tmp |= CMD_SET_DARS;
break;
}
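	/*
	 * Board-code sketch (hypothetical names, illustration only): an MMCIF
	 * platform that supports DDR would advertise the capability via its
	 * platform data, which sh_mmcif_probe() ORs into mmc->caps:
	 *
	 *	static struct sh_mmcif_plat_data mmcif0_pdata = {
	 *		.caps	= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_UHS_DDR50,
	 *	};
	 */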
}
/* DWEN */
if (opc == MMC_WRITE_BLOCK || opc == MMC_WRITE_MULTIPLE_BLOCK)
......@@ -824,7 +839,7 @@ static int sh_mmcif_data_trans(struct sh_mmcif_host *host,
sh_mmcif_single_read(host, mrq);
return 0;
default:
dev_err(&host->pd->dev, "UNSUPPORTED CMD = d'%08d\n", opc);
dev_err(&host->pd->dev, "Unsupported CMD%d\n", opc);
return -EINVAL;
}
}
......@@ -838,6 +853,7 @@ static void sh_mmcif_start_cmd(struct sh_mmcif_host *host,
switch (opc) {
/* response busy check */
case MMC_SLEEP_AWAKE:
case MMC_SWITCH:
case MMC_STOP_TRANSMISSION:
case MMC_SET_WRITE_PROT:
......@@ -885,7 +901,6 @@ static void sh_mmcif_stop_cmd(struct sh_mmcif_host *host,
}
host->wait_for = MMCIF_WAIT_FOR_STOP;
schedule_delayed_work(&host->timeout_work, host->timeout);
}
static void sh_mmcif_request(struct mmc_host *mmc, struct mmc_request *mrq)
......@@ -895,6 +910,7 @@ static void sh_mmcif_request(struct mmc_host *mmc, struct mmc_request *mrq)
spin_lock_irqsave(&host->lock, flags);
if (host->state != STATE_IDLE) {
dev_dbg(&host->pd->dev, "%s() rejected, state %u\n", __func__, host->state);
spin_unlock_irqrestore(&host->lock, flags);
mrq->cmd->error = -EAGAIN;
mmc_request_done(mmc, mrq);
......@@ -911,6 +927,7 @@ static void sh_mmcif_request(struct mmc_host *mmc, struct mmc_request *mrq)
if ((mrq->cmd->flags & MMC_CMD_MASK) != MMC_CMD_BCR)
break;
case MMC_APP_CMD:
case SD_IO_RW_DIRECT:
host->state = STATE_IDLE;
mrq->cmd->error = -ETIMEDOUT;
mmc_request_done(mmc, mrq);
......@@ -957,6 +974,7 @@ static void sh_mmcif_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
spin_lock_irqsave(&host->lock, flags);
if (host->state != STATE_IDLE) {
dev_dbg(&host->pd->dev, "%s() rejected, state %u\n", __func__, host->state);
spin_unlock_irqrestore(&host->lock, flags);
return;
}
......@@ -981,7 +999,7 @@ static void sh_mmcif_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
}
}
if (host->power) {
pm_runtime_put(&host->pd->dev);
pm_runtime_put_sync(&host->pd->dev);
clk_disable(host->hclk);
host->power = false;
if (ios->power_mode == MMC_POWER_OFF)
......@@ -1001,6 +1019,7 @@ static void sh_mmcif_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
sh_mmcif_clock_control(host, ios->clock);
}
host->timing = ios->timing;
host->bus_width = ios->bus_width;
host->state = STATE_IDLE;
}
......@@ -1038,14 +1057,14 @@ static bool sh_mmcif_end_cmd(struct sh_mmcif_host *host)
case MMC_SELECT_CARD:
case MMC_APP_CMD:
cmd->error = -ETIMEDOUT;
host->sd_error = false;
break;
default:
cmd->error = sh_mmcif_error_manage(host);
dev_dbg(&host->pd->dev, "Cmd(d'%d) error %d\n",
cmd->opcode, cmd->error);
break;
}
dev_dbg(&host->pd->dev, "CMD%d error %d\n",
cmd->opcode, cmd->error);
host->sd_error = false;
return false;
}
if (!(cmd->flags & MMC_RSP_PRESENT)) {
......@@ -1058,6 +1077,12 @@ static bool sh_mmcif_end_cmd(struct sh_mmcif_host *host)
if (!data)
return false;
/*
* Completion can be signalled from the DMA callback and from an error, so
* it has to be reset here, before setting .dma_active
*/
init_completion(&host->dma_complete);
if (data->flags & MMC_DATA_READ) {
if (host->chan_rx)
sh_mmcif_start_dma_rx(host);
......@@ -1068,34 +1093,47 @@ static bool sh_mmcif_end_cmd(struct sh_mmcif_host *host)
if (!host->dma_active) {
data->error = sh_mmcif_data_trans(host, host->mrq, cmd->opcode);
if (!data->error)
return true;
return false;
return !data->error;
}
/* Running in the IRQ thread, can sleep */
time = wait_for_completion_interruptible_timeout(&host->dma_complete,
host->timeout);
if (data->flags & MMC_DATA_READ)
dma_unmap_sg(host->chan_rx->device->dev,
data->sg, data->sg_len,
DMA_FROM_DEVICE);
else
dma_unmap_sg(host->chan_tx->device->dev,
data->sg, data->sg_len,
DMA_TO_DEVICE);
if (host->sd_error) {
dev_err(host->mmc->parent,
"Error IRQ while waiting for DMA completion!\n");
/* Woken up by an error IRQ: abort DMA */
if (data->flags & MMC_DATA_READ)
dmaengine_terminate_all(host->chan_rx);
else
dmaengine_terminate_all(host->chan_tx);
data->error = sh_mmcif_error_manage(host);
} else if (!time) {
dev_err(host->mmc->parent, "DMA timeout!\n");
data->error = -ETIMEDOUT;
} else if (time < 0) {
dev_err(host->mmc->parent,
"wait_for_completion_...() error %ld!\n", time);
data->error = time;
}
sh_mmcif_bitclr(host, MMCIF_CE_BUF_ACC,
BUF_ACC_DMAREN | BUF_ACC_DMAWEN);
host->dma_active = false;
if (data->error)
if (data->error) {
data->bytes_xfered = 0;
/* Abort DMA */
if (data->flags & MMC_DATA_READ)
dmaengine_terminate_all(host->chan_rx);
else
dmaengine_terminate_all(host->chan_tx);
}
return false;
}
......@@ -1103,10 +1141,21 @@ static bool sh_mmcif_end_cmd(struct sh_mmcif_host *host)
static irqreturn_t sh_mmcif_irqt(int irq, void *dev_id)
{
struct sh_mmcif_host *host = dev_id;
struct mmc_request *mrq = host->mrq;
struct mmc_request *mrq;
bool wait = false;
cancel_delayed_work_sync(&host->timeout_work);
mutex_lock(&host->thread_lock);
mrq = host->mrq;
if (!mrq) {
dev_dbg(&host->pd->dev, "IRQ thread state %u, wait %u: NULL mrq!\n",
host->state, host->wait_for);
mutex_unlock(&host->thread_lock);
return IRQ_HANDLED;
}
/*
* All handlers return true if processing continues, and false if the
* request has to be completed - successfully or not
......@@ -1114,35 +1163,32 @@ static irqreturn_t sh_mmcif_irqt(int irq, void *dev_id)
switch (host->wait_for) {
case MMCIF_WAIT_FOR_REQUEST:
/* We're too late, the timeout has already kicked in */
mutex_unlock(&host->thread_lock);
return IRQ_HANDLED;
case MMCIF_WAIT_FOR_CMD:
if (sh_mmcif_end_cmd(host))
/* Wait for data */
return IRQ_HANDLED;
/* Wait for data? */
wait = sh_mmcif_end_cmd(host);
break;
case MMCIF_WAIT_FOR_MREAD:
if (sh_mmcif_mread_block(host))
/* Wait for more data */
return IRQ_HANDLED;
/* Wait for more data? */
wait = sh_mmcif_mread_block(host);
break;
case MMCIF_WAIT_FOR_READ:
if (sh_mmcif_read_block(host))
/* Wait for data end */
return IRQ_HANDLED;
/* Wait for data end? */
wait = sh_mmcif_read_block(host);
break;
case MMCIF_WAIT_FOR_MWRITE:
if (sh_mmcif_mwrite_block(host))
/* Wait data to write */
return IRQ_HANDLED;
/* Wait data to write? */
wait = sh_mmcif_mwrite_block(host);
break;
case MMCIF_WAIT_FOR_WRITE:
if (sh_mmcif_write_block(host))
/* Wait for data end */
return IRQ_HANDLED;
/* Wait for data end? */
wait = sh_mmcif_write_block(host);
break;
case MMCIF_WAIT_FOR_STOP:
if (host->sd_error) {
mrq->stop->error = sh_mmcif_error_manage(host);
dev_dbg(&host->pd->dev, "%s(): %d\n", __func__, mrq->stop->error);
break;
}
sh_mmcif_get_cmd12response(host, mrq->stop);
......@@ -1150,13 +1196,22 @@ static irqreturn_t sh_mmcif_irqt(int irq, void *dev_id)
break;
case MMCIF_WAIT_FOR_READ_END:
case MMCIF_WAIT_FOR_WRITE_END:
if (host->sd_error)
if (host->sd_error) {
mrq->data->error = sh_mmcif_error_manage(host);
dev_dbg(&host->pd->dev, "%s(): %d\n", __func__, mrq->data->error);
}
break;
default:
BUG();
}
if (wait) {
schedule_delayed_work(&host->timeout_work, host->timeout);
/* Wait for more data */
mutex_unlock(&host->thread_lock);
return IRQ_HANDLED;
}
if (host->wait_for != MMCIF_WAIT_FOR_STOP) {
struct mmc_data *data = mrq->data;
if (!mrq->cmd->error && data && !data->error)
......@@ -1165,8 +1220,11 @@ static irqreturn_t sh_mmcif_irqt(int irq, void *dev_id)
if (mrq->stop && !mrq->cmd->error && (!data || !data->error)) {
sh_mmcif_stop_cmd(host, mrq);
if (!mrq->stop->error)
if (!mrq->stop->error) {
schedule_delayed_work(&host->timeout_work, host->timeout);
mutex_unlock(&host->thread_lock);
return IRQ_HANDLED;
}
}
}
......@@ -1175,6 +1233,8 @@ static irqreturn_t sh_mmcif_irqt(int irq, void *dev_id)
host->mrq = NULL;
mmc_request_done(host->mmc, mrq);
mutex_unlock(&host->thread_lock);
return IRQ_HANDLED;
}
......@@ -1182,56 +1242,22 @@ static irqreturn_t sh_mmcif_intr(int irq, void *dev_id)
{
struct sh_mmcif_host *host = dev_id;
u32 state;
int err = 0;
state = sh_mmcif_readl(host->addr, MMCIF_CE_INT);
sh_mmcif_writel(host->addr, MMCIF_CE_INT, ~state);
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, state & MASK_CLEAN);
if (state & INT_ERR_STS) {
/* error interrupts - process first */
sh_mmcif_writel(host->addr, MMCIF_CE_INT, ~state);
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, state);
err = 1;
} else if (state & INT_RBSYE) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT,
~(INT_RBSYE | INT_CRSPE));
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MRBSYE);
} else if (state & INT_CRSPE) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT, ~INT_CRSPE);
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MCRSPE);
} else if (state & INT_BUFREN) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT, ~INT_BUFREN);
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MBUFREN);
} else if (state & INT_BUFWEN) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT, ~INT_BUFWEN);
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MBUFWEN);
} else if (state & INT_CMD12DRE) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT,
~(INT_CMD12DRE | INT_CMD12RBE |
INT_CMD12CRE | INT_BUFRE));
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MCMD12DRE);
} else if (state & INT_BUFRE) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT, ~INT_BUFRE);
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MBUFRE);
} else if (state & INT_DTRANE) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT,
~(INT_CMD12DRE | INT_CMD12RBE |
INT_CMD12CRE | INT_DTRANE));
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MDTRANE);
} else if (state & INT_CMD12RBE) {
sh_mmcif_writel(host->addr, MMCIF_CE_INT,
~(INT_CMD12RBE | INT_CMD12CRE));
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, MASK_MCMD12RBE);
} else {
dev_dbg(&host->pd->dev, "Unsupported interrupt: 0x%x\n", state);
sh_mmcif_writel(host->addr, MMCIF_CE_INT, ~state);
sh_mmcif_bitclr(host, MMCIF_CE_INT_MASK, state);
err = 1;
}
if (err) {
if (state & ~MASK_CLEAN)
dev_dbg(&host->pd->dev, "IRQ state = 0x%08x incompletely cleared\n",
state);
if (state & INT_ERR_STS || state & ~INT_ALL) {
host->sd_error = true;
dev_dbg(&host->pd->dev, "int err state = %08x\n", state);
dev_dbg(&host->pd->dev, "int err state = 0x%08x\n", state);
}
if (state & ~(INT_CMD12RBE | INT_CMD12CRE)) {
if (!host->mrq)
dev_dbg(&host->pd->dev, "NULL IRQ state = 0x%08x\n", state);
if (!host->dma_active)
return IRQ_WAKE_THREAD;
else if (host->sd_error)
......@@ -1248,11 +1274,24 @@ static void mmcif_timeout_work(struct work_struct *work)
struct delayed_work *d = container_of(work, struct delayed_work, work);
struct sh_mmcif_host *host = container_of(d, struct sh_mmcif_host, timeout_work);
struct mmc_request *mrq = host->mrq;
unsigned long flags;
if (host->dying)
/* Don't run after mmc_remove_host() */
return;
dev_err(&host->pd->dev, "Timeout waiting for %u on CMD%u\n",
host->wait_for, mrq->cmd->opcode);
spin_lock_irqsave(&host->lock, flags);
if (host->state == STATE_IDLE) {
spin_unlock_irqrestore(&host->lock, flags);
return;
}
host->state = STATE_TIMEOUT;
spin_unlock_irqrestore(&host->lock, flags);
/*
* Handle races with cancel_delayed_work(), unless
* cancel_delayed_work_sync() is used
......@@ -1306,10 +1345,11 @@ static int sh_mmcif_probe(struct platform_device *pdev)
struct sh_mmcif_plat_data *pd = pdev->dev.platform_data;
struct resource *res;
void __iomem *reg;
const char *name;
irq[0] = platform_get_irq(pdev, 0);
irq[1] = platform_get_irq(pdev, 1);
if (irq[0] < 0 || irq[1] < 0) {
if (irq[0] < 0) {
dev_err(&pdev->dev, "Get irq error\n");
return -ENXIO;
}
......@@ -1329,10 +1369,11 @@ static int sh_mmcif_probe(struct platform_device *pdev)
ret = -ENOMEM;
goto ealloch;
}
mmc_of_parse(mmc);
host = mmc_priv(mmc);
host->mmc = mmc;
host->addr = reg;
host->timeout = 1000;
host->timeout = msecs_to_jiffies(1000);
host->pd = pdev;
......@@ -1341,7 +1382,7 @@ static int sh_mmcif_probe(struct platform_device *pdev)
mmc->ops = &sh_mmcif_ops;
sh_mmcif_init_ocr(host);
mmc->caps = MMC_CAP_MMC_HIGHSPEED;
mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_WAIT_WHILE_BUSY;
if (pd && pd->caps)
mmc->caps |= pd->caps;
mmc->max_segs = 32;
......@@ -1374,15 +1415,19 @@ static int sh_mmcif_probe(struct platform_device *pdev)
sh_mmcif_sync_reset(host);
sh_mmcif_writel(host->addr, MMCIF_CE_INT_MASK, MASK_ALL);
ret = request_threaded_irq(irq[0], sh_mmcif_intr, sh_mmcif_irqt, 0, "sh_mmc:error", host);
name = irq[1] < 0 ? dev_name(&pdev->dev) : "sh_mmc:error";
ret = request_threaded_irq(irq[0], sh_mmcif_intr, sh_mmcif_irqt, 0, name, host);
if (ret) {
dev_err(&pdev->dev, "request_irq error (sh_mmc:error)\n");
dev_err(&pdev->dev, "request_irq error (%s)\n", name);
goto ereqirq0;
}
ret = request_threaded_irq(irq[1], sh_mmcif_intr, sh_mmcif_irqt, 0, "sh_mmc:int", host);
if (ret) {
dev_err(&pdev->dev, "request_irq error (sh_mmc:int)\n");
goto ereqirq1;
if (irq[1] >= 0) {
ret = request_threaded_irq(irq[1], sh_mmcif_intr, sh_mmcif_irqt,
0, "sh_mmc:int", host);
if (ret) {
dev_err(&pdev->dev, "request_irq error (sh_mmc:int)\n");
goto ereqirq1;
}
}
if (pd && pd->use_cd_gpio) {
......@@ -1391,6 +1436,8 @@ static int sh_mmcif_probe(struct platform_device *pdev)
goto erqcd;
}
mutex_init(&host->thread_lock);
clk_disable(host->hclk);
ret = mmc_add_host(mmc);
if (ret < 0)
......@@ -1404,10 +1451,9 @@ static int sh_mmcif_probe(struct platform_device *pdev)
return ret;
emmcaddh:
if (pd && pd->use_cd_gpio)
mmc_gpio_free_cd(mmc);
erqcd:
free_irq(irq[1], host);
if (irq[1] >= 0)
free_irq(irq[1], host);
ereqirq1:
free_irq(irq[0], host);
ereqirq0:
......@@ -1427,7 +1473,6 @@ static int sh_mmcif_probe(struct platform_device *pdev)
static int sh_mmcif_remove(struct platform_device *pdev)
{
struct sh_mmcif_host *host = platform_get_drvdata(pdev);
struct sh_mmcif_plat_data *pd = pdev->dev.platform_data;
int irq[2];
host->dying = true;
......@@ -1436,9 +1481,6 @@ static int sh_mmcif_remove(struct platform_device *pdev)
dev_pm_qos_hide_latency_limit(&pdev->dev);
if (pd && pd->use_cd_gpio)
mmc_gpio_free_cd(host->mmc);
mmc_remove_host(host->mmc);
sh_mmcif_writel(host->addr, MMCIF_CE_INT_MASK, MASK_ALL);
......@@ -1456,7 +1498,8 @@ static int sh_mmcif_remove(struct platform_device *pdev)
irq[1] = platform_get_irq(pdev, 1);
free_irq(irq[0], host);
free_irq(irq[1], host);
if (irq[1] >= 0)
free_irq(irq[1], host);
platform_set_drvdata(pdev, NULL);
......
......@@ -23,6 +23,7 @@
#include <linux/slab.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/mmc/host.h>
#include <linux/mmc/sh_mobile_sdhi.h>
......@@ -32,6 +33,16 @@
#include "tmio_mmc.h"
struct sh_mobile_sdhi_of_data {
unsigned long tmio_flags;
};
static const struct sh_mobile_sdhi_of_data sh_mobile_sdhi_of_cfg[] = {
{
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT,
},
};
struct sh_mobile_sdhi {
struct clk *clk;
struct tmio_mmc_data mmc_data;
......@@ -117,8 +128,18 @@ static const struct sh_mobile_sdhi_ops sdhi_ops = {
.cd_wakeup = sh_mobile_sdhi_cd_wakeup,
};
static const struct of_device_id sh_mobile_sdhi_of_match[] = {
{ .compatible = "renesas,shmobile-sdhi" },
{ .compatible = "renesas,sh7372-sdhi" },
{ .compatible = "renesas,r8a7740-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], },
{},
};
MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
static int sh_mobile_sdhi_probe(struct platform_device *pdev)
{
const struct of_device_id *of_id =
of_match_device(sh_mobile_sdhi_of_match, &pdev->dev);
struct sh_mobile_sdhi *priv;
struct tmio_mmc_data *mmc_data;
struct sh_mobile_sdhi_info *p = pdev->dev.platform_data;
......@@ -126,7 +147,7 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
int irq, ret, i = 0;
bool multiplexed_isr = true;
priv = kzalloc(sizeof(struct sh_mobile_sdhi), GFP_KERNEL);
priv = devm_kzalloc(&pdev->dev, sizeof(struct sh_mobile_sdhi), GFP_KERNEL);
if (priv == NULL) {
dev_err(&pdev->dev, "kzalloc failed\n");
return -ENOMEM;
......@@ -135,15 +156,14 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
mmc_data = &priv->mmc_data;
if (p) {
p->pdata = mmc_data;
if (p->init) {
ret = p->init(pdev, &sdhi_ops);
if (ret)
goto einit;
return ret;
}
}
priv->clk = clk_get(&pdev->dev, NULL);
priv->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(priv->clk)) {
ret = PTR_ERR(priv->clk);
dev_err(&pdev->dev, "cannot get clock: %d\n", ret);
......@@ -153,10 +173,9 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
mmc_data->clk_enable = sh_mobile_sdhi_clk_enable;
mmc_data->clk_disable = sh_mobile_sdhi_clk_disable;
mmc_data->capabilities = MMC_CAP_MMC_HIGHSPEED;
mmc_data->write16_hook = sh_mobile_sdhi_write16_hook;
if (p) {
mmc_data->flags = p->tmio_flags;
if (mmc_data->flags & TMIO_MMC_HAS_IDLE_WAIT)
mmc_data->write16_hook = sh_mobile_sdhi_write16_hook;
mmc_data->ocr_mask = p->tmio_ocr_mask;
mmc_data->capabilities |= p->tmio_caps;
mmc_data->capabilities2 |= p->tmio_caps2;
......@@ -187,6 +206,11 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
*/
mmc_data->flags |= TMIO_MMC_SDIO_IRQ;
if (of_id && of_id->data) {
const struct sh_mobile_sdhi_of_data *of_data = of_id->data;
mmc_data->flags |= of_data->tmio_flags;
}
ret = tmio_mmc_host_probe(&host, pdev, mmc_data);
if (ret < 0)
goto eprobe;
......@@ -199,33 +223,33 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_CARD_DETECT);
if (irq >= 0) {
multiplexed_isr = false;
ret = request_irq(irq, tmio_mmc_card_detect_irq, 0,
ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_card_detect_irq, 0,
dev_name(&pdev->dev), host);
if (ret)
goto eirq_card_detect;
goto eirq;
}
irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_SDIO);
if (irq >= 0) {
multiplexed_isr = false;
ret = request_irq(irq, tmio_mmc_sdio_irq, 0,
ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_sdio_irq, 0,
dev_name(&pdev->dev), host);
if (ret)
goto eirq_sdio;
goto eirq;
}
irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_SDCARD);
if (irq >= 0) {
multiplexed_isr = false;
ret = request_irq(irq, tmio_mmc_sdcard_irq, 0,
ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_sdcard_irq, 0,
dev_name(&pdev->dev), host);
if (ret)
goto eirq_sdcard;
goto eirq;
} else if (!multiplexed_isr) {
dev_err(&pdev->dev,
"Principal SD-card IRQ is missing among named interrupts\n");
ret = irq;
goto eirq_sdcard;
goto eirq;
}
if (multiplexed_isr) {
......@@ -234,15 +258,15 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
if (irq < 0)
break;
i++;
ret = request_irq(irq, tmio_mmc_irq, 0,
ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_irq, 0,
dev_name(&pdev->dev), host);
if (ret)
goto eirq_multiplexed;
goto eirq;
}
/* There must be at least one IRQ source */
if (!i)
goto eirq_multiplexed;
goto eirq;
}
dev_info(&pdev->dev, "%s base at 0x%08lx clock rate %u MHz\n",
......@@ -252,28 +276,12 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
return ret;
eirq_multiplexed:
while (i--) {
irq = platform_get_irq(pdev, i);
free_irq(irq, host);
}
eirq_sdcard:
irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_SDIO);
if (irq >= 0)
free_irq(irq, host);
eirq_sdio:
irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_CARD_DETECT);
if (irq >= 0)
free_irq(irq, host);
eirq_card_detect:
eirq:
tmio_mmc_host_remove(host);
eprobe:
clk_put(priv->clk);
eclkget:
if (p && p->cleanup)
p->cleanup(pdev);
einit:
kfree(priv);
return ret;
}
......@@ -281,29 +289,13 @@ static int sh_mobile_sdhi_remove(struct platform_device *pdev)
{
struct mmc_host *mmc = platform_get_drvdata(pdev);
struct tmio_mmc_host *host = mmc_priv(mmc);
struct sh_mobile_sdhi *priv = container_of(host->pdata, struct sh_mobile_sdhi, mmc_data);
struct sh_mobile_sdhi_info *p = pdev->dev.platform_data;
int i = 0, irq;
if (p)
p->pdata = NULL;
tmio_mmc_host_remove(host);
while (1) {
irq = platform_get_irq(pdev, i++);
if (irq < 0)
break;
free_irq(irq, host);
}
clk_put(priv->clk);
if (p && p->cleanup)
p->cleanup(pdev);
kfree(priv);
return 0;
}
......@@ -314,12 +306,6 @@ static const struct dev_pm_ops tmio_mmc_dev_pm_ops = {
.runtime_resume = tmio_mmc_host_runtime_resume,
};
static const struct of_device_id sh_mobile_sdhi_of_match[] = {
{ .compatible = "renesas,shmobile-sdhi" },
{ }
};
MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
static struct platform_driver sh_mobile_sdhi_driver = {
.driver = {
.name = "sh_mobile_sdhi",
......
......@@ -43,6 +43,7 @@
#include <linux/platform_device.h>
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <linux/scatterlist.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
......@@ -155,6 +156,7 @@ static void tmio_mmc_set_clock(struct tmio_mmc_host *host, int new_clock)
host->set_clk_div(host->pdev, (clk>>22) & 1);
sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, clk & 0x1ff);
msleep(10);
}
static void tmio_mmc_clk_stop(struct tmio_mmc_host *host)
......@@ -768,16 +770,48 @@ static int tmio_mmc_clk_update(struct mmc_host *mmc)
return ret;
}
static void tmio_mmc_set_power(struct tmio_mmc_host *host, struct mmc_ios *ios)
static void tmio_mmc_power_on(struct tmio_mmc_host *host, unsigned short vdd)
{
struct mmc_host *mmc = host->mmc;
int ret = 0;
/* .set_ios() returns void, so there is no way to report an error */
if (host->set_pwr)
host->set_pwr(host->pdev, ios->power_mode != MMC_POWER_OFF);
host->set_pwr(host->pdev, 1);
if (!IS_ERR(mmc->supply.vmmc)) {
ret = mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
/*
* Attention: empirical value. With a b43 WiFi SDIO card this
* delay proved necessary for reliable card-insertion probing.
* 100us was not enough. Is this the same 140us delay as in
* tmio_mmc_set_ios()?
*/
udelay(200);
}
/*
* It seems VccQ should be switched on after Vcc; this is also what the
* omap_hsmmc.c driver does.
*/
if (!IS_ERR(mmc->supply.vqmmc) && !ret) {
regulator_enable(mmc->supply.vqmmc);
udelay(200);
}
}
static void tmio_mmc_power_off(struct tmio_mmc_host *host)
{
struct mmc_host *mmc = host->mmc;
if (!IS_ERR(mmc->supply.vqmmc))
regulator_disable(mmc->supply.vqmmc);
if (!IS_ERR(mmc->supply.vmmc))
/* Errors ignored... */
mmc_regulator_set_ocr(mmc, mmc->supply.vmmc,
ios->power_mode ? ios->vdd : 0);
mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0);
if (host->set_pwr)
host->set_pwr(host->pdev, 0);
}
/* Set MMC clock / power.
......@@ -828,18 +862,20 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
if (!host->power) {
tmio_mmc_clk_update(mmc);
pm_runtime_get_sync(dev);
host->power = true;
}
tmio_mmc_set_clock(host, ios->clock);
/* power up SD bus */
tmio_mmc_set_power(host, ios);
if (!host->power) {
/* power up SD card and the bus */
tmio_mmc_power_on(host, ios->vdd);
host->power = true;
}
/* start bus clock */
tmio_mmc_clk_start(host);
} else if (ios->power_mode != MMC_POWER_UP) {
if (ios->power_mode == MMC_POWER_OFF)
tmio_mmc_set_power(host, ios);
if (host->power) {
struct tmio_mmc_data *pdata = host->pdata;
if (ios->power_mode == MMC_POWER_OFF)
tmio_mmc_power_off(host);
tmio_mmc_clk_stop(host);
host->power = false;
pm_runtime_put(dev);
......@@ -918,6 +954,17 @@ static void tmio_mmc_init_ocr(struct tmio_mmc_host *host)
dev_warn(mmc_dev(mmc), "Platform OCR mask is ignored\n");
}
static void tmio_mmc_of_parse(struct platform_device *pdev,
struct tmio_mmc_data *pdata)
{
const struct device_node *np = pdev->dev.of_node;
if (!np)
return;
if (of_get_property(np, "toshiba,mmc-wrprotect-disable", NULL))
pdata->flags |= TMIO_MMC_WRPROTECT_DISABLE;
}
int tmio_mmc_host_probe(struct tmio_mmc_host **host,
struct platform_device *pdev,
struct tmio_mmc_data *pdata)
......@@ -928,6 +975,11 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
int ret;
u32 irq_mask = TMIO_MASK_CMD;
tmio_mmc_of_parse(pdev, pdata);
if (!(pdata->flags & TMIO_MMC_HAS_IDLE_WAIT))
pdata->write16_hook = NULL;
res_ctl = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res_ctl)
return -EINVAL;
......@@ -936,6 +988,8 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
if (!mmc)
return -ENOMEM;
mmc_of_parse(mmc);
pdata->dev = &pdev->dev;
_host = mmc_priv(mmc);
_host->pdata = pdata;
......@@ -956,7 +1010,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
}
mmc->ops = &tmio_mmc_ops;
mmc->caps = MMC_CAP_4_BIT_DATA | pdata->capabilities;
mmc->caps |= MMC_CAP_4_BIT_DATA | pdata->capabilities;
mmc->caps2 = pdata->capabilities2;
mmc->max_segs = 32;
mmc->max_blk_size = 512;
......@@ -968,7 +1022,8 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
_host->native_hotplug = !(pdata->flags & TMIO_MMC_USE_GPIO_CD ||
mmc->caps & MMC_CAP_NEEDS_POLL ||
mmc->caps & MMC_CAP_NONREMOVABLE);
mmc->caps & MMC_CAP_NONREMOVABLE ||
mmc->slot.cd_irq >= 0);
_host->power = false;
pm_runtime_enable(&pdev->dev);
......@@ -1060,16 +1115,8 @@ EXPORT_SYMBOL(tmio_mmc_host_probe);
void tmio_mmc_host_remove(struct tmio_mmc_host *host)
{
struct platform_device *pdev = host->pdev;
struct tmio_mmc_data *pdata = host->pdata;
struct mmc_host *mmc = host->mmc;
if (pdata->flags & TMIO_MMC_USE_GPIO_CD)
/*
* This means we can miss a card-eject, but this is anyway
* possible, because of delayed processing of hotplug events.
*/
mmc_gpio_free_cd(mmc);
if (!host->native_hotplug)
pm_runtime_get_sync(&pdev->dev);
......
......@@ -1012,7 +1012,7 @@ static const struct dev_pm_ops wmt_mci_pm = {
static struct platform_driver wmt_mci_driver = {
.probe = wmt_mci_probe,
.remove = __exit_p(wmt_mci_remove),
.remove = wmt_mci_remove,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
......
......@@ -64,12 +64,6 @@
* Some controllers can support SDIO IRQ signalling.
*/
#define TMIO_MMC_SDIO_IRQ (1 << 2)
/*
* Some platforms can detect card insertion events with controller powered
* down, using a GPIO IRQ, in which case they have to fill in cd_irq, cd_gpio,
* and cd_flags fields of struct tmio_mmc_data.
*/
#define TMIO_MMC_HAS_COLD_CD (1 << 3)
/*
* Some controllers require waiting for the SD bus to become
* idle before writing to some registers.
......@@ -116,18 +110,6 @@ struct tmio_mmc_data {
void (*clk_disable)(struct platform_device *pdev);
};
/*
* This function is deprecated and will be removed soon. Please, convert your
* platform to use drivers/mmc/core/cd-gpio.c
*/
#include <linux/mmc/host.h>
static inline void tmio_mmc_cd_wakeup(struct tmio_mmc_data *pdata)
{
if (pdata)
mmc_detect_change(dev_get_drvdata(pdata->dev),
msecs_to_jiffies(100));
}
/*
* data for the NAND controller
*/
......
......@@ -53,6 +53,9 @@ struct mmc_ext_csd {
u8 part_config;
u8 cache_ctrl;
u8 rst_n_function;
u8 max_packed_writes;
u8 max_packed_reads;
u8 packed_event_en;
unsigned int part_time; /* Units: ms */
unsigned int sa_timeout; /* Units: 100ns */
unsigned int generic_cmd6_time; /* Units: 10ms */
......@@ -83,7 +86,7 @@ struct mmc_ext_csd {
unsigned int data_tag_unit_size; /* DATA TAG UNIT size */
unsigned int boot_ro_lock; /* ro lock support */
bool boot_ro_lockable;
u8 raw_exception_status; /* 53 */
u8 raw_exception_status; /* 54 */
u8 raw_partition_support; /* 160 */
u8 raw_rpmb_size_mult; /* 168 */
u8 raw_erased_mem_count; /* 181 */
......@@ -187,6 +190,18 @@ struct sdio_func_tuple;
#define SDIO_MAX_FUNCS 7
enum mmc_blk_status {
MMC_BLK_SUCCESS = 0,
MMC_BLK_PARTIAL,
MMC_BLK_CMD_ERR,
MMC_BLK_RETRY,
MMC_BLK_ABORT,
MMC_BLK_DATA_ERR,
MMC_BLK_ECC_ERR,
MMC_BLK_NOMEDIUM,
MMC_BLK_NEW_REQUEST,
};
/* The number of MMC physical partitions. These consist of:
* boot partitions (2), general purpose partitions (4) in MMC v4.4.
*/
......@@ -295,6 +310,11 @@ static inline void mmc_part_add(struct mmc_card *card, unsigned int size,
card->nr_parts++;
}
static inline bool mmc_large_sector(struct mmc_card *card)
{
return card->ext_csd.data_sector_size == 4096;
}
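mmc_large_sector() gives callers a single test for eMMC devices whose native data sector size is 4096 bytes. A hedged sketch of a typical caller (names illustrative):

        /* merged requests must stay 4 KiB aligned on large-sector cards:
         * 8 blocks of 512 bytes per native sector */
        if (mmc_large_sector(card) && (nr_blocks & 7))
                pr_warn("%s: unaligned request on a 4 KiB-sector card\n",
                        mmc_hostname(card->host));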
/*
* The world is not perfect and supplies us with broken mmc/sdio devices.
* For at least some of these bugs we need a work-around.
......
......@@ -18,6 +18,9 @@ struct mmc_request;
struct mmc_command {
u32 opcode;
u32 arg;
#define MMC_CMD23_ARG_REL_WR (1 << 31)
#define MMC_CMD23_ARG_PACKED ((0 << 31) | (1 << 30))
#define MMC_CMD23_ARG_TAG_REQ (1 << 29)
u32 resp[4];
unsigned int flags; /* expected response type */
#define MMC_RSP_PRESENT (1 << 0)
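The MMC_CMD23_ARG_* flags added above are OR-ed into the SET_BLOCK_COUNT argument that precedes a data transfer; bit 30 marks the CMD23 that heads a packed group. A minimal sketch, with packed_blocks standing in for the block driver's computed count:

        struct mmc_command sbc = {};

        sbc.opcode = MMC_SET_BLOCK_COUNT;
        /* header CMD23 of a packed transfer: bit 30 set, low bits = block count */
        sbc.arg = MMC_CMD23_ARG_PACKED | packed_blocks;
        sbc.flags = MMC_RSP_R1 | MMC_CMD_AC;

For an ordinary reliable write, MMC_CMD23_ARG_REL_WR (bit 31) would be set instead.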
......@@ -120,6 +123,7 @@ struct mmc_data {
s32 host_cookie; /* host private data */
};
struct mmc_host;
struct mmc_request {
struct mmc_command *sbc; /* SET_BLOCK_COUNT for multiblock */
struct mmc_command *cmd;
......@@ -128,9 +132,9 @@ struct mmc_request {
struct completion completion;
void (*done)(struct mmc_request *);/* completion function */
struct mmc_host *host;
};
struct mmc_host;
struct mmc_card;
struct mmc_async_req;
......@@ -147,6 +151,7 @@ extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
extern void mmc_start_bkops(struct mmc_card *card, bool from_exception);
extern int __mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int, bool);
extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
extern int mmc_send_ext_csd(struct mmc_card *card, u8 *ext_csd);
#define MMC_ERASE_ARG 0x00000000
#define MMC_SECURE_ERASE_ARG 0x80000000
......
......@@ -209,8 +209,10 @@ struct dw_mci_dma_ops {
#define DW_MCI_QUIRK_HIGHSPEED BIT(2)
/* Unreliable card detection */
#define DW_MCI_QUIRK_BROKEN_CARD_DETECTION BIT(3)
/* Write Protect detection not available */
#define DW_MCI_QUIRK_NO_WRITE_PROTECT BIT(4)
/* Slot level quirks */
/* This slot has no write protect */
#define DW_MCI_SLOT_QUIRK_NO_WRITE_PROTECT BIT(0)
struct dma_pdata;
......
......@@ -131,9 +131,11 @@ struct mmc_host_ops {
int (*start_signal_voltage_switch)(struct mmc_host *host, struct mmc_ios *ios);
/* Check if the card is pulling dat[0:3] low */
int (*card_busy)(struct mmc_host *host);
/* The tuning command opcode value is different for SD and eMMC cards */
int (*execute_tuning)(struct mmc_host *host, u32 opcode);
void (*enable_preset_value)(struct mmc_host *host, bool enable);
int (*select_drive_strength)(unsigned int max_dtr, int host_drv, int card_drv);
void (*hw_reset)(struct mmc_host *host);
void (*card_event)(struct mmc_host *host);
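The comment added above documents why ->execute_tuning() takes an opcode: SD (UHS-I) cards and eMMC (HS200) devices use different tuning commands. An illustrative call site (not the exact core code):

        u32 opcode = mmc_card_mmc(card) ? MMC_SEND_TUNING_BLOCK_HS200 :
                                          MMC_SEND_TUNING_BLOCK;

        if (host->ops->execute_tuning)
                err = host->ops->execute_tuning(host, opcode);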
......@@ -170,6 +172,22 @@ struct mmc_slot {
void *handler_priv;
};
/**
* mmc_context_info - synchronization details for mmc context
* @is_done_rcv wake up reason was done request
* @is_new_req wake up reason was new request
* @is_waiting_last_req mmc context waiting for single running request
* @wait wait queue
* @lock lock to protect data fields
*/
struct mmc_context_info {
bool is_done_rcv;
bool is_new_req;
bool is_waiting_last_req;
wait_queue_head_t wait;
spinlock_t lock;
};
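mmc_context_info carries the state the block layer's asynchronous request path sleeps on. A hedged sketch of the waiting side, assuming host and a completed-request handler are in scope (not the exact core code):

        struct mmc_context_info *ctx = &host->context_info;
        unsigned long flags;

        /* sleep until the running request completes or a new one arrives */
        wait_event_interruptible(ctx->wait, ctx->is_done_rcv || ctx->is_new_req);

        spin_lock_irqsave(&ctx->lock, flags);
        if (ctx->is_done_rcv) {
                ctx->is_done_rcv = false;
                /* ... post-process the completed request ... */
        } else if (ctx->is_new_req) {
                /* MMC_BLK_NEW_REQUEST: bail out and let the queue issue it */
        }
        spin_unlock_irqrestore(&ctx->lock, flags);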
struct regulator;
struct mmc_supply {
......@@ -258,6 +276,10 @@ struct mmc_host {
#define MMC_CAP2_HC_ERASE_SZ (1 << 9) /* High-capacity erase size */
#define MMC_CAP2_CD_ACTIVE_HIGH (1 << 10) /* Card-detect signal active high */
#define MMC_CAP2_RO_ACTIVE_HIGH (1 << 11) /* Write-protect signal active high */
#define MMC_CAP2_PACKED_RD (1 << 12) /* Allow packed read */
#define MMC_CAP2_PACKED_WR (1 << 13) /* Allow packed write */
#define MMC_CAP2_PACKED_CMD (MMC_CAP2_PACKED_RD | \
MMC_CAP2_PACKED_WR)
mmc_pm_flag_t pm_caps; /* supported pm features */
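Packed commands are opt-in per host: a driver advertises support through caps2. A one-line sketch from a hypothetical probe routine; packed reads would use MMC_CAP2_PACKED_RD, or both directions via MMC_CAP2_PACKED_CMD:

        /* allow the core to build packed write groups on this controller */
        mmc->caps2 |= MMC_CAP2_PACKED_WR;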
......@@ -331,6 +353,7 @@ struct mmc_host {
struct dentry *debugfs_root;
struct mmc_async_req *areq; /* active async req */
struct mmc_context_info context_info; /* async synchronization info */
#ifdef CONFIG_FAIL_MMC_REQUEST
struct fault_attr fail_mmc_request;
......@@ -341,10 +364,11 @@ struct mmc_host {
unsigned long private[0] ____cacheline_aligned;
};
extern struct mmc_host *mmc_alloc_host(int extra, struct device *);
extern int mmc_add_host(struct mmc_host *);
extern void mmc_remove_host(struct mmc_host *);
extern void mmc_free_host(struct mmc_host *);
struct mmc_host *mmc_alloc_host(int extra, struct device *);
int mmc_add_host(struct mmc_host *);
void mmc_remove_host(struct mmc_host *);
void mmc_free_host(struct mmc_host *);
void mmc_of_parse(struct mmc_host *host);
static inline void *mmc_priv(struct mmc_host *host)
{
......@@ -357,16 +381,16 @@ static inline void *mmc_priv(struct mmc_host *host)
#define mmc_classdev(x) (&(x)->class_dev)
#define mmc_hostname(x) (dev_name(&(x)->class_dev))
extern int mmc_suspend_host(struct mmc_host *);
extern int mmc_resume_host(struct mmc_host *);
int mmc_suspend_host(struct mmc_host *);
int mmc_resume_host(struct mmc_host *);
extern int mmc_power_save_host(struct mmc_host *host);
extern int mmc_power_restore_host(struct mmc_host *host);
int mmc_power_save_host(struct mmc_host *host);
int mmc_power_restore_host(struct mmc_host *host);
extern void mmc_detect_change(struct mmc_host *, unsigned long delay);
extern void mmc_request_done(struct mmc_host *, struct mmc_request *);
void mmc_detect_change(struct mmc_host *, unsigned long delay);
void mmc_request_done(struct mmc_host *, struct mmc_request *);
extern int mmc_cache_ctrl(struct mmc_host *, u8);
int mmc_cache_ctrl(struct mmc_host *, u8);
static inline void mmc_signal_sdio_irq(struct mmc_host *host)
{
......@@ -434,6 +458,19 @@ static inline int mmc_boot_partition_access(struct mmc_host *host)
return !(host->caps2 & MMC_CAP2_BOOTPART_NOACC);
}
static inline int mmc_host_uhs(struct mmc_host *host)
{
return host->caps &
(MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_DDR50);
}
static inline int mmc_host_packed_wr(struct mmc_host *host)
{
return host->caps2 & MMC_CAP2_PACKED_WR;
}
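mmc_host_uhs() and mmc_host_packed_wr() collapse the common capability tests into helpers. Illustrative uses (the conditions are sketches, not the exact core logic):

        /* skip 1.8 V signalling setup entirely when the host has no UHS modes */
        if (mmc_host_uhs(host)) {
                /* ... negotiate the signalling voltage and an SDR/DDR timing ... */
        }

        /* only enable the card's packed-write machinery if the host opted in */
        if (mmc_host_packed_wr(host) && card->ext_csd.max_packed_writes > 0) {
                /* ... switch EXT_CSD_EXP_EVENTS_CTRL to enable packed events ... */
        }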
#ifdef CONFIG_MMC_CLKGATE
void mmc_host_clk_hold(struct mmc_host *host);
void mmc_host_clk_release(struct mmc_host *host);
......
......@@ -139,7 +139,7 @@ static inline bool mmc_op_multi(u32 opcode)
#define R1_CURRENT_STATE(x) ((x & 0x00001E00) >> 9) /* sx, b (4 bits) */
#define R1_READY_FOR_DATA (1 << 8) /* sx, a */
#define R1_SWITCH_ERROR (1 << 7) /* sx, c */
#define R1_EXCEPTION_EVENT (1 << 6) /* sx, a */
#define R1_EXCEPTION_EVENT (1 << 6) /* sr, a */
#define R1_APP_CMD (1 << 5) /* sr, c */
#define R1_STATE_IDLE 0
......@@ -275,7 +275,10 @@ struct _mmc_csd {
#define EXT_CSD_FLUSH_CACHE 32 /* W */
#define EXT_CSD_CACHE_CTRL 33 /* R/W */
#define EXT_CSD_POWER_OFF_NOTIFICATION 34 /* R/W */
#define EXT_CSD_EXP_EVENTS_STATUS 54 /* RO */
#define EXT_CSD_PACKED_FAILURE_INDEX 35 /* RO */
#define EXT_CSD_PACKED_CMD_STATUS 36 /* RO */
#define EXT_CSD_EXP_EVENTS_STATUS 54 /* RO, 2 bytes */
#define EXT_CSD_EXP_EVENTS_CTRL 56 /* R/W, 2 bytes */
#define EXT_CSD_DATA_SECTOR_SIZE 61 /* R */
#define EXT_CSD_GP_SIZE_MULT 143 /* R/W */
#define EXT_CSD_PARTITION_ATTRIBUTE 156 /* R/W */
......@@ -324,6 +327,8 @@ struct _mmc_csd {
#define EXT_CSD_CACHE_SIZE 249 /* RO, 4 bytes */
#define EXT_CSD_TAG_UNIT_SIZE 498 /* RO */
#define EXT_CSD_DATA_TAG_SUPPORT 499 /* RO */
#define EXT_CSD_MAX_PACKED_WRITES 500 /* RO */
#define EXT_CSD_MAX_PACKED_READS 501 /* RO */
#define EXT_CSD_BKOPS_SUPPORT 502 /* RO */
#define EXT_CSD_HPI_FEATURES 503 /* RO */
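During card initialisation the core copies these bytes into the new ext_csd fields added earlier in the series. A short sketch, assuming ext_csd[] has already been fetched with mmc_send_ext_csd():

        card->ext_csd.max_packed_writes = ext_csd[EXT_CSD_MAX_PACKED_WRITES];
        card->ext_csd.max_packed_reads  = ext_csd[EXT_CSD_MAX_PACKED_READS];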
......@@ -385,6 +390,9 @@ struct _mmc_csd {
#define EXT_CSD_PWR_CL_4BIT_MASK 0x0F /* 8 bit PWR CLS */
#define EXT_CSD_PWR_CL_8BIT_SHIFT 4
#define EXT_CSD_PWR_CL_4BIT_SHIFT 0
#define EXT_CSD_PACKED_EVENT_EN BIT(3)
/*
* EXCEPTION_EVENT_STATUS field
*/
......@@ -393,6 +401,9 @@ struct _mmc_csd {
#define EXT_CSD_SYSPOOL_EXHAUSTED BIT(2)
#define EXT_CSD_PACKED_FAILURE BIT(3)
#define EXT_CSD_PACKED_GENERIC_ERROR BIT(0)
#define EXT_CSD_PACKED_INDEXED_ERROR BIT(1)
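These status bits are what the block driver inspects when a packed write reports an exception. A hedged sketch of that error path (the ext_csd buffer and failure_index are illustrative locals):

        if (status & R1_EXCEPTION_EVENT) {
                err = mmc_send_ext_csd(card, ext_csd);
                if (!err &&
                    (ext_csd[EXT_CSD_PACKED_CMD_STATUS] &
                     (EXT_CSD_PACKED_GENERIC_ERROR | EXT_CSD_PACKED_INDEXED_ERROR)))
                        /* index of the first failed request in the packed group */
                        failure_index = ext_csd[EXT_CSD_PACKED_FAILURE_INDEX] - 1;
        }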
/*
* BKOPS status level
*/
......
......@@ -94,6 +94,7 @@ struct sdhci_host {
#define SDHCI_QUIRK2_HOST_NO_CMD23 (1<<1)
/* The system physically doesn't support 1.8v, even if the host does */
#define SDHCI_QUIRK2_NO_1_8_V (1<<2)
#define SDHCI_QUIRK2_PRESET_VALUE_BROKEN (1<<3)
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
......
......@@ -4,7 +4,6 @@
#include <linux/types.h>
struct platform_device;
struct tmio_mmc_data;
#define SH_MOBILE_SDHI_IRQ_CARD_DETECT "card_detect"
#define SH_MOBILE_SDHI_IRQ_SDCARD "sdcard"
......@@ -26,7 +25,6 @@ struct sh_mobile_sdhi_info {
unsigned long tmio_caps2;
u32 tmio_ocr_mask; /* available MMC voltages */
unsigned int cd_gpio;
struct tmio_mmc_data *pdata;
void (*set_pwr)(struct platform_device *pdev, int state);
int (*get_cd)(struct platform_device *pdev);
......
......@@ -39,5 +39,6 @@ struct esdhc_platform_data {
unsigned int cd_gpio;
enum wp_types wp_type;
enum cd_types cd_type;
int max_bus_width;
};
#endif /* __ASM_ARCH_IMX_ESDHC_H */
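The new max_bus_width field lets board code cap the number of data lines wired up on an i.MX eSDHC slot. A hypothetical board-file fragment (all values illustrative):

        static const struct esdhc_platform_data example_esdhc_pdata __initconst = {
                .cd_type       = ESDHC_CD_GPIO,
                .wp_type       = ESDHC_WP_GPIO,
                .cd_gpio       = 42,
                .wp_gpio       = 43,
                .max_bus_width = 4,     /* only DAT0-DAT3 routed on this board */
        };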
/*
* Copyright (C) 2009 Palm, Inc.
* Author: Yvonne Yip <y@palm.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __PLATFORM_DATA_TEGRA_SDHCI_H
#define __PLATFORM_DATA_TEGRA_SDHCI_H
#include <linux/mmc/host.h>
struct tegra_sdhci_platform_data {
int cd_gpio;
int wp_gpio;
int power_gpio;
int is_8bit;
int pm_flags;
};
#endif