Commit 0df55ea5 authored by Linus Torvalds's avatar Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc: (78 commits)
  mmc: MAINTAINERS: add myself as a tmio-mmc maintainer
  mmc: print debug messages for runtime PM actions
  mmc: fix runtime PM with -ENOSYS suspend case
  mmc: at91_mci: move register header from include/ to drivers/
  mmc: mxs-mmc: fix clock rate setting
  mmc: tmio: fix a deadlock
  mmc: tmio: fix a recently introduced bug in DMA code
  mmc: sh_mmcif: maximize power saving
  mmc: tmio: maximize power saving
  mmc: tmio: fix recursive spinlock, don't schedule with interrupts disabled
  mmc: Added quirks for Ricoh 1180:e823 lower base clock frequency
  mmc: omap_hsmmc: fix oops in omap_hsmmc_dma_cb()
  mmc: omap_hsmmc: refactor duplicated code
  mmc: omap_hsmmc: fix a few bugs when setting the clock divisor
  mmc: omap_hsmmc: introduce start_clock and re-use stop_clock
  mmc: omap_hsmmc: split duplicate code to calc_divisor() function
  mmc: omap_hsmmc: move hardcoded frequency constants to defines
  mmc: omap_hsmmc: correct debug report error status mnemonics
  mmc: block: fixed NULL pointer dereference
  mmc: documentation of mmc non-blocking request usage and design.
  ...
parents c1f792a5 d1057c40
......@@ -4,3 +4,5 @@ mmc-dev-attrs.txt
- info on SD and MMC device attributes
mmc-dev-parts.txt
- info on SD and MMC device partitions
mmc-async-req.txt
- info on mmc asynchronous requests
Rationale
=========
How significant is the cache maintenance overhead?
It depends. Fast eMMC and multiple cache levels with speculative cache
pre-fetch make the cache overhead relatively significant. If the DMA
preparations for the next request are done in parallel with the current
transfer, the DMA preparation overhead would not affect the MMC performance.
The intention of non-blocking (asynchronous) MMC requests is to minimize the
time between when an MMC request ends and another MMC request begins.
Using mmc_wait_for_req(), the MMC controller is idle while dma_map_sg() and
dma_unmap_sg() run. Using non-blocking MMC requests makes it
possible to prepare the caches for the next job in parallel with an active
MMC request.
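As a rough illustration of the intended overlap (prep and post are
shorthand here for dma_map_sg() plus descriptor setup and for
dma_unmap_sg(); they are not real function names):

  blocking:      prep(A) xfer(A) post(A) prep(B) xfer(B) post(B)
  non-blocking:  prep(A) xfer(A)         xfer(B)         post(B)
                         prep(B)...      post(A)...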
MMC block driver
================
The mmc_blk_issue_rw_rq() in the MMC block driver is made non-blocking.
The increase in throughput is proportional to the time it takes to
prepare a request (the major part of the preparation is dma_map_sg() and
dma_unmap_sg()) and to how fast the memory is. The faster the MMC/SD is,
the more significant the request preparation time becomes. Roughly, the
expected performance gain is 5% for large writes and 10% for large reads on
an L2-cache platform. In power-save mode, when clocks run at a lower
frequency, the DMA preparation may cost even more. As long as these slower
preparations run in parallel with the transfer, performance won't be affected.
Details on measurements from IOZone and mmc_test
================================================
https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
MMC core API extension
======================
There is one new public function, mmc_start_req().
It starts a new MMC command request for a host. The function isn't
truly non-blocking. If there is an ongoing async request, it waits
for that request to complete, starts the new one, and returns; it
doesn't wait for the new request to complete. If there is no ongoing
request, it starts the new request and returns immediately.
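A hedged caller sketch, assuming each element of reqs[] already has its
.mrq and .err_check members filled in (issue_all() and the reqs[] array
are illustrative names, not part of this merge):

static int issue_all(struct mmc_host *host, struct mmc_async_req *reqs,
		     int nr)
{
	int i, err = 0;

	for (i = 0; i < nr && !err; i++)
		/* Starts reqs[i]; returns the previously started request
		 * (if any) once it has completed, so its buffers can be
		 * post-processed while reqs[i] is still running. */
		mmc_start_req(host, &reqs[i], &err);

	if (!err)
		/* Passing a NULL areq just waits for the last
		 * in-flight request to complete. */
		mmc_start_req(host, NULL, &err);

	return err;
}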
MMC host extensions
===================
There are two optional members in the mmc_host_ops -- pre_req() and
post_req() -- that the host driver may implement in order to move work
to before and after the actual mmc_host_ops.request() function is called.
In the DMA case pre_req() may do dma_map_sg() and prepare the DMA
descriptor, and post_req() runs the dma_unmap_sg().
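An illustrative host-driver skeleton (my_host, my_request, my_map_dma and
my_unmap_dma are hypothetical names, not from this merge): pre_req() maps
the sg list and builds the DMA descriptor while the previous transfer may
still be running, and post_req() unmaps it afterwards.

static void my_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
		       bool is_first_req)
{
	struct my_host *host = mmc_priv(mmc);

	if (mrq->data)
		my_map_dma(host, mrq->data);	/* dma_map_sg() + desc */
}

static void my_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
			int err)
{
	struct my_host *host = mmc_priv(mmc);

	if (mrq->data)
		/* dma_unmap_sg(); on err, also drop what pre_req() built */
		my_unmap_dma(host, mrq->data, err);
}

static const struct mmc_host_ops my_ops = {
	.request	= my_request,
	.pre_req	= my_pre_req,
	.post_req	= my_post_req,
};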
Optimize for the first request
==============================
The first request in a series of requests can't be prepared in parallel
with the previous transfer, since there is no previous request.
The argument is_first_req in pre_req() indicates that there is no previous
request. The host driver may optimize for this scenario to minimize
the performance loss. A way to optimize for this is to split the current
request into two chunks, prepare the first chunk and start the request,
and finally prepare the second chunk and start the transfer.
Pseudocode to handle is_first_req scenario with minimal prepare overhead:
if (is_first_req && req->size > threshold) {
	/* start MMC transfer for the complete transfer size */
	mmc_start_command(MMC_CMD_TRANSFER_FULL_SIZE);

	/*
	 * Begin to prepare DMA while cmd is being processed by MMC.
	 * The first chunk of the request should take the same time
	 * to prepare as the "MMC process command time".
	 * If the prepare time exceeds the MMC cmd time
	 * the transfer is delayed; guesstimate max 4k as first chunk size.
	 */
	prepare_1st_chunk_for_dma(req);
	/* flush pending desc to the DMAC (dmaengine.h) */
	dma_issue_pending(req->dma_desc);

	prepare_2nd_chunk_for_dma(req);
	/*
	 * The second issue_pending should be called before MMC runs out
	 * of the first chunk. If the MMC runs out of the first data chunk
	 * before this call, the transfer is delayed.
	 */
	dma_issue_pending(req->dma_desc);
}
......@@ -4585,9 +4585,8 @@ S: Maintained
F: drivers/mmc/host/omap.c
OMAP HS MMC SUPPORT
M: Madhusudhan Chikkature <madhu.cr@ti.com>
L: linux-omap@vger.kernel.org
S: Maintained
S: Orphan
F: drivers/mmc/host/omap_hsmmc.c
OMAP RANDOM NUMBER GENERATOR SUPPORT
......@@ -6243,9 +6242,14 @@ F: drivers/char/toshiba.c
F: include/linux/toshiba.h
TMIO MMC DRIVER
M: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
M: Ian Molton <ian@mnementh.co.uk>
L: linux-mmc@vger.kernel.org
S: Maintained
F: drivers/mmc/host/tmio_mmc.*
F: drivers/mmc/host/tmio_mmc*
F: drivers/mmc/host/sh_mobile_sdhi.c
F: include/linux/mmc/tmio.h
F: include/linux/mmc/sh_mobile_sdhi.h
TMPFS (SHMEM FILESYSTEM)
M: Hugh Dickins <hughd@google.com>
......
......@@ -8,6 +8,7 @@ CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_ARCH_MMP=y
CONFIG_MACH_BROWNSTONE=y
CONFIG_MACH_FLINT=y
CONFIG_MACH_MARVELL_JASPER=y
CONFIG_HIGH_RES_TIMERS=y
......@@ -63,10 +64,16 @@ CONFIG_BACKLIGHT_MAX8925=y
# CONFIG_USB_SUPPORT is not set
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_MAX8925=y
CONFIG_MMC=y
# CONFIG_DNOTIFY is not set
CONFIG_INOTIFY=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
CONFIG_EXT4_FS=y
CONFIG_MSDOS_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_JFFS2_FS=y
CONFIG_CRAMFS=y
CONFIG_NFS_FS=y
......@@ -81,7 +88,7 @@ CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_INFO=y
# CONFIG_RCU_CPU_STALL_DETECTOR is not set
CONFIG_DYNAMIC_DEBUG=y
# CONFIG_DYNAMIC_DEBUG is not set
CONFIG_DEBUG_USER=y
CONFIG_DEBUG_ERRORS=y
# CONFIG_CRYPTO_ANSI_CPRNG is not set
......
......@@ -177,9 +177,16 @@ static struct i2c_board_info brownstone_twsi1_info[] = {
};
static struct sdhci_pxa_platdata mmp2_sdh_platdata_mmc0 = {
.max_speed = 25000000,
.clk_delay_cycles = 0x1f,
};
static struct sdhci_pxa_platdata mmp2_sdh_platdata_mmc2 = {
.clk_delay_cycles = 0x1f,
.flags = PXA_FLAG_CARD_PERMANENT
| PXA_FLAG_SD_8_BIT_CAPABLE_SLOT,
};
static void __init brownstone_init(void)
{
mfp_config(ARRAY_AND_SIZE(brownstone_pin_config));
......@@ -189,6 +196,7 @@ static void __init brownstone_init(void)
mmp2_add_uart(3);
mmp2_add_twsi(1, NULL, ARRAY_AND_SIZE(brownstone_twsi1_info));
mmp2_add_sdhost(0, &mmp2_sdh_platdata_mmc0); /* SD/MMC */
mmp2_add_sdhost(2, &mmp2_sdh_platdata_mmc2); /* eMMC */
/* enable 5v regulator */
platform_device_register(&brownstone_v_5vp_device);
......
#ifndef __ASM_MACH_MMP2_H
#define __ASM_MACH_MMP2_H
#include <plat/sdhci.h>
#include <linux/platform_data/pxa_sdhci.h>
struct sys_timer;
......
......@@ -154,7 +154,7 @@ static struct i2c_board_info jasper_twsi1_info[] = {
};
static struct sdhci_pxa_platdata mmp2_sdh_platdata_mmc0 = {
.max_speed = 25000000,
.clk_delay_cycles = 0x1f,
};
static void __init jasper_init(void)
......
......@@ -168,10 +168,10 @@ static struct clk_lookup mmp2_clkregs[] = {
INIT_CLKREG(&clk_twsi5, "pxa2xx-i2c.4", NULL),
INIT_CLKREG(&clk_twsi6, "pxa2xx-i2c.5", NULL),
INIT_CLKREG(&clk_nand, "pxa3xx-nand", NULL),
INIT_CLKREG(&clk_sdh0, "sdhci-pxa.0", "PXA-SDHCLK"),
INIT_CLKREG(&clk_sdh1, "sdhci-pxa.1", "PXA-SDHCLK"),
INIT_CLKREG(&clk_sdh2, "sdhci-pxa.2", "PXA-SDHCLK"),
INIT_CLKREG(&clk_sdh3, "sdhci-pxa.3", "PXA-SDHCLK"),
INIT_CLKREG(&clk_sdh0, "sdhci-pxav3.0", "PXA-SDHCLK"),
INIT_CLKREG(&clk_sdh1, "sdhci-pxav3.1", "PXA-SDHCLK"),
INIT_CLKREG(&clk_sdh2, "sdhci-pxav3.2", "PXA-SDHCLK"),
INIT_CLKREG(&clk_sdh3, "sdhci-pxav3.3", "PXA-SDHCLK"),
};
static int __init mmp2_init(void)
......@@ -222,8 +222,8 @@ MMP2_DEVICE(twsi4, "pxa2xx-i2c", 3, TWSI4, 0xd4033000, 0x70);
MMP2_DEVICE(twsi5, "pxa2xx-i2c", 4, TWSI5, 0xd4033800, 0x70);
MMP2_DEVICE(twsi6, "pxa2xx-i2c", 5, TWSI6, 0xd4034000, 0x70);
MMP2_DEVICE(nand, "pxa3xx-nand", -1, NAND, 0xd4283000, 0x100, 28, 29);
MMP2_DEVICE(sdh0, "sdhci-pxa", 0, MMC, 0xd4280000, 0x120);
MMP2_DEVICE(sdh1, "sdhci-pxa", 1, MMC2, 0xd4280800, 0x120);
MMP2_DEVICE(sdh2, "sdhci-pxa", 2, MMC3, 0xd4281000, 0x120);
MMP2_DEVICE(sdh3, "sdhci-pxa", 3, MMC4, 0xd4281800, 0x120);
MMP2_DEVICE(sdh0, "sdhci-pxav3", 0, MMC, 0xd4280000, 0x120);
MMP2_DEVICE(sdh1, "sdhci-pxav3", 1, MMC2, 0xd4280800, 0x120);
MMP2_DEVICE(sdh2, "sdhci-pxav3", 2, MMC3, 0xd4281000, 0x120);
MMP2_DEVICE(sdh3, "sdhci-pxav3", 3, MMC4, 0xd4281800, 0x120);
......@@ -52,14 +52,18 @@ static int mmc_queue_thread(void *d)
down(&mq->thread_sem);
do {
struct request *req = NULL;
struct mmc_queue_req *tmp;
spin_lock_irq(q->queue_lock);
set_current_state(TASK_INTERRUPTIBLE);
req = blk_fetch_request(q);
mq->req = req;
mq->mqrq_cur->req = req;
spin_unlock_irq(q->queue_lock);
if (!req) {
if (req || mq->mqrq_prev->req) {
set_current_state(TASK_RUNNING);
mq->issue_fn(mq, req);
} else {
if (kthread_should_stop()) {
set_current_state(TASK_RUNNING);
break;
......@@ -67,11 +71,14 @@ static int mmc_queue_thread(void *d)
up(&mq->thread_sem);
schedule();
down(&mq->thread_sem);
continue;
}
set_current_state(TASK_RUNNING);
mq->issue_fn(mq, req);
/* Current request becomes previous request and vice versa. */
mq->mqrq_prev->brq.mrq.data = NULL;
mq->mqrq_prev->req = NULL;
tmp = mq->mqrq_prev;
mq->mqrq_prev = mq->mqrq_cur;
mq->mqrq_cur = tmp;
} while (1);
up(&mq->thread_sem);
......@@ -97,10 +104,46 @@ static void mmc_request(struct request_queue *q)
return;
}
if (!mq->req)
if (!mq->mqrq_cur->req && !mq->mqrq_prev->req)
wake_up_process(mq->thread);
}
struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
{
struct scatterlist *sg;
sg = kmalloc(sizeof(struct scatterlist)*sg_len, GFP_KERNEL);
if (!sg)
*err = -ENOMEM;
else {
*err = 0;
sg_init_table(sg, sg_len);
}
return sg;
}
static void mmc_queue_setup_discard(struct request_queue *q,
struct mmc_card *card)
{
unsigned max_discard;
max_discard = mmc_calc_max_discard(card);
if (!max_discard)
return;
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
q->limits.max_discard_sectors = max_discard;
if (card->erased_byte == 0)
q->limits.discard_zeroes_data = 1;
q->limits.discard_granularity = card->pref_erase << 9;
/* granularity must not be greater than max. discard */
if (card->pref_erase > max_discard)
q->limits.discard_granularity = 0;
if (mmc_can_secure_erase_trim(card))
queue_flag_set_unlocked(QUEUE_FLAG_SECDISCARD, q);
}
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
......@@ -116,6 +159,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
struct mmc_host *host = card->host;
u64 limit = BLK_BOUNCE_HIGH;
int ret;
struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
limit = *mmc_dev(host)->dma_mask;
......@@ -125,21 +170,16 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
if (!mq->queue)
return -ENOMEM;
memset(&mq->mqrq_cur, 0, sizeof(mq->mqrq_cur));
memset(&mq->mqrq_prev, 0, sizeof(mq->mqrq_prev));
mq->mqrq_cur = mqrq_cur;
mq->mqrq_prev = mqrq_prev;
mq->queue->queuedata = mq;
mq->req = NULL;
blk_queue_prep_rq(mq->queue, mmc_prep_request);
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
if (mmc_can_erase(card)) {
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mq->queue);
mq->queue->limits.max_discard_sectors = UINT_MAX;
if (card->erased_byte == 0)
mq->queue->limits.discard_zeroes_data = 1;
mq->queue->limits.discard_granularity = card->pref_erase << 9;
if (mmc_can_secure_erase_trim(card))
queue_flag_set_unlocked(QUEUE_FLAG_SECDISCARD,
mq->queue);
}
if (mmc_can_erase(card))
mmc_queue_setup_discard(mq->queue, card);
#ifdef CONFIG_MMC_BLOCK_BOUNCE
if (host->max_segs == 1) {
......@@ -155,53 +195,64 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
bouncesz = host->max_blk_count * 512;
if (bouncesz > 512) {
mq->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
if (!mq->bounce_buf) {
mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
if (!mqrq_cur->bounce_buf) {
printk(KERN_WARNING "%s: unable to "
"allocate bounce cur buffer\n",
mmc_card_name(card));
}
mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
if (!mqrq_prev->bounce_buf) {
printk(KERN_WARNING "%s: unable to "
"allocate bounce buffer\n",
"allocate bounce prev buffer\n",
mmc_card_name(card));
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;
}
}
if (mq->bounce_buf) {
if (mqrq_cur->bounce_buf && mqrq_prev->bounce_buf) {
blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
blk_queue_max_segments(mq->queue, bouncesz / 512);
blk_queue_max_segment_size(mq->queue, bouncesz);
mq->sg = kmalloc(sizeof(struct scatterlist),
GFP_KERNEL);
if (!mq->sg) {
ret = -ENOMEM;
mqrq_cur->sg = mmc_alloc_sg(1, &ret);
if (ret)
goto cleanup_queue;
}
sg_init_table(mq->sg, 1);
mq->bounce_sg = kmalloc(sizeof(struct scatterlist) *
bouncesz / 512, GFP_KERNEL);
if (!mq->bounce_sg) {
ret = -ENOMEM;
mqrq_cur->bounce_sg =
mmc_alloc_sg(bouncesz / 512, &ret);
if (ret)
goto cleanup_queue;
mqrq_prev->sg = mmc_alloc_sg(1, &ret);
if (ret)
goto cleanup_queue;
mqrq_prev->bounce_sg =
mmc_alloc_sg(bouncesz / 512, &ret);
if (ret)
goto cleanup_queue;
}
sg_init_table(mq->bounce_sg, bouncesz / 512);
}
}
#endif
if (!mq->bounce_buf) {
if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) {
blk_queue_bounce_limit(mq->queue, limit);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
mq->sg = kmalloc(sizeof(struct scatterlist) *
host->max_segs, GFP_KERNEL);
if (!mq->sg) {
ret = -ENOMEM;
mqrq_cur->sg = mmc_alloc_sg(host->max_segs, &ret);
if (ret)
goto cleanup_queue;
mqrq_prev->sg = mmc_alloc_sg(host->max_segs, &ret);
if (ret)
goto cleanup_queue;
}
sg_init_table(mq->sg, host->max_segs);
}
sema_init(&mq->thread_sem, 1);
......@@ -216,16 +267,22 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
return 0;
free_bounce_sg:
if (mq->bounce_sg)
kfree(mq->bounce_sg);
mq->bounce_sg = NULL;
kfree(mqrq_cur->bounce_sg);
mqrq_cur->bounce_sg = NULL;
kfree(mqrq_prev->bounce_sg);
mqrq_prev->bounce_sg = NULL;
cleanup_queue:
if (mq->sg)
kfree(mq->sg);
mq->sg = NULL;
if (mq->bounce_buf)
kfree(mq->bounce_buf);
mq->bounce_buf = NULL;
kfree(mqrq_cur->sg);
mqrq_cur->sg = NULL;
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;
kfree(mqrq_prev->sg);
mqrq_prev->sg = NULL;
kfree(mqrq_prev->bounce_buf);
mqrq_prev->bounce_buf = NULL;
blk_cleanup_queue(mq->queue);
return ret;
}
......@@ -234,6 +291,8 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
unsigned long flags;
struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
/* Make sure the queue isn't suspended, as that will deadlock */
mmc_queue_resume(mq);
......@@ -247,16 +306,23 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
if (mq->bounce_sg)
kfree(mq->bounce_sg);
mq->bounce_sg = NULL;
kfree(mqrq_cur->bounce_sg);
mqrq_cur->bounce_sg = NULL;
kfree(mq->sg);
mq->sg = NULL;
kfree(mqrq_cur->sg);
mqrq_cur->sg = NULL;
if (mq->bounce_buf)
kfree(mq->bounce_buf);
mq->bounce_buf = NULL;
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;
kfree(mqrq_prev->bounce_sg);
mqrq_prev->bounce_sg = NULL;
kfree(mqrq_prev->sg);
mqrq_prev->sg = NULL;
kfree(mqrq_prev->bounce_buf);
mqrq_prev->bounce_buf = NULL;
mq->card = NULL;
}
......@@ -309,27 +375,27 @@ void mmc_queue_resume(struct mmc_queue *mq)
/*
 * Prepare the sg list(s) to be handed off to the host driver
*/
unsigned int mmc_queue_map_sg(struct mmc_queue *mq)
unsigned int mmc_queue_map_sg(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
{
unsigned int sg_len;
size_t buflen;
struct scatterlist *sg;
int i;
if (!mq->bounce_buf)
return blk_rq_map_sg(mq->queue, mq->req, mq->sg);
if (!mqrq->bounce_buf)
return blk_rq_map_sg(mq->queue, mqrq->req, mqrq->sg);
BUG_ON(!mq->bounce_sg);
BUG_ON(!mqrq->bounce_sg);
sg_len = blk_rq_map_sg(mq->queue, mq->req, mq->bounce_sg);
sg_len = blk_rq_map_sg(mq->queue, mqrq->req, mqrq->bounce_sg);
mq->bounce_sg_len = sg_len;
mqrq->bounce_sg_len = sg_len;
buflen = 0;
for_each_sg(mq->bounce_sg, sg, sg_len, i)
for_each_sg(mqrq->bounce_sg, sg, sg_len, i)
buflen += sg->length;
sg_init_one(mq->sg, mq->bounce_buf, buflen);
sg_init_one(mqrq->sg, mqrq->bounce_buf, buflen);
return 1;
}
......@@ -338,31 +404,30 @@ unsigned int mmc_queue_map_sg(struct mmc_queue *mq)
* If writing, bounce the data to the buffer before the request
* is sent to the host driver
*/
void mmc_queue_bounce_pre(struct mmc_queue *mq)
void mmc_queue_bounce_pre(struct mmc_queue_req *mqrq)
{
if (!mq->bounce_buf)
if (!mqrq->bounce_buf)
return;
if (rq_data_dir(mq->req) != WRITE)
if (rq_data_dir(mqrq->req) != WRITE)
return;
sg_copy_to_buffer(mq->bounce_sg, mq->bounce_sg_len,
mq->bounce_buf, mq->sg[0].length);
sg_copy_to_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len,
mqrq->bounce_buf, mqrq->sg[0].length);
}
/*
* If reading, bounce the data from the buffer after the request
* has been handled by the host driver
*/
void mmc_queue_bounce_post(struct mmc_queue *mq)
void mmc_queue_bounce_post(struct mmc_queue_req *mqrq)
{
if (!mq->bounce_buf)
if (!mqrq->bounce_buf)
return;
if (rq_data_dir(mq->req) != READ)
if (rq_data_dir(mqrq->req) != READ)
return;
sg_copy_from_buffer(mq->bounce_sg, mq->bounce_sg_len,
mq->bounce_buf, mq->sg[0].length);
sg_copy_from_buffer(mqrq->bounce_sg, mqrq->bounce_sg_len,
mqrq->bounce_buf, mqrq->sg[0].length);
}
......@@ -4,19 +4,35 @@
struct request;
struct task_struct;
struct mmc_blk_request {
struct mmc_request mrq;
struct mmc_command sbc;
struct mmc_command cmd;
struct mmc_command stop;
struct mmc_data data;
};
struct mmc_queue_req {
struct request *req;
struct mmc_blk_request brq;
struct scatterlist *sg;
char *bounce_buf;
struct scatterlist *bounce_sg;
unsigned int bounce_sg_len;
struct mmc_async_req mmc_active;
};
struct mmc_queue {
struct mmc_card *card;
struct task_struct *thread;
struct semaphore thread_sem;
unsigned int flags;
struct request *req;
int (*issue_fn)(struct mmc_queue *, struct request *);
void *data;
struct request_queue *queue;
struct scatterlist *sg;
char *bounce_buf;
struct scatterlist *bounce_sg;
unsigned int bounce_sg_len;
struct mmc_queue_req mqrq[2];
struct mmc_queue_req *mqrq_cur;
struct mmc_queue_req *mqrq_prev;
};
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
......@@ -25,8 +41,9 @@ extern void mmc_cleanup_queue(struct mmc_queue *);
extern void mmc_queue_suspend(struct mmc_queue *);
extern void mmc_queue_resume(struct mmc_queue *);
extern unsigned int mmc_queue_map_sg(struct mmc_queue *);
extern void mmc_queue_bounce_pre(struct mmc_queue *);
extern void mmc_queue_bounce_post(struct mmc_queue *);
extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
struct mmc_queue_req *);
extern void mmc_queue_bounce_pre(struct mmc_queue_req *);
extern void mmc_queue_bounce_post(struct mmc_queue_req *);
#endif
......@@ -198,9 +198,109 @@ mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
static void mmc_wait_done(struct mmc_request *mrq)
{
complete(mrq->done_data);
complete(&mrq->completion);
}
static void __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
{
init_completion(&mrq->completion);
mrq->done = mmc_wait_done;
mmc_start_request(host, mrq);
}
static void mmc_wait_for_req_done(struct mmc_host *host,
struct mmc_request *mrq)
{
wait_for_completion(&mrq->completion);
}
/**
* mmc_pre_req - Prepare for a new request
* @host: MMC host to prepare command
* @mrq: MMC request to prepare for
 * @is_first_req: true if there is no previously started request
 * that may run in parallel to this call, otherwise false
 *
 * mmc_pre_req() is called prior to mmc_start_req() to let the
 * host prepare for the new request. Preparation of a request may be
* performed while another request is running on the host.
*/
static void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq,
bool is_first_req)
{
if (host->ops->pre_req)
host->ops->pre_req(host, mrq, is_first_req);
}
/**
* mmc_post_req - Post process a completed request
* @host: MMC host to post process command
* @mrq: MMC request to post process for
* @err: Error, if non zero, clean up any resources made in pre_req
*
* Let the host post process a completed request. Post processing of
 * a request may be performed while another request is running.
*/
static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
int err)
{
if (host->ops->post_req)
host->ops->post_req(host, mrq, err);
}
/**
* mmc_start_req - start a non-blocking request
* @host: MMC host to start command
* @areq: async request to start
* @error: out parameter returns 0 for success, otherwise non zero
*
* Start a new MMC custom command request for a host.
 * If there is an ongoing async request, wait for completion
 * of that request, start the new one, and return.
* Does not wait for the new request to complete.
*
* Returns the completed request, NULL in case of none completed.
 * Wait for an ongoing request (previously started) to complete and
* return the completed request. If there is no ongoing request, NULL
* is returned without waiting. NULL is not an error condition.
*/
struct mmc_async_req *mmc_start_req(struct mmc_host *host,
struct mmc_async_req *areq, int *error)
{
int err = 0;
struct mmc_async_req *data = host->areq;
/* Prepare a new request */
if (areq)
mmc_pre_req(host, areq->mrq, !host->areq);
if (host->areq) {
mmc_wait_for_req_done(host, host->areq->mrq);
err = host->areq->err_check(host->card, host->areq);
if (err) {
mmc_post_req(host, host->areq->mrq, 0);
if (areq)
mmc_post_req(host, areq->mrq, -EINVAL);
host->areq = NULL;
goto out;
}
}
if (areq)
__mmc_start_req(host, areq->mrq);
if (host->areq)
mmc_post_req(host, host->areq->mrq, 0);
host->areq = areq;
out:
if (error)
*error = err;
return data;
}
EXPORT_SYMBOL(mmc_start_req);
/**
* mmc_wait_for_req - start a request and wait for completion
* @host: MMC host to start command
......@@ -212,16 +312,9 @@ static void mmc_wait_done(struct mmc_request *mrq)
*/
void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
{
DECLARE_COMPLETION_ONSTACK(complete);
mrq->done_data = &complete;
mrq->done = mmc_wait_done;
mmc_start_request(host, mrq);
wait_for_completion(&complete);
__mmc_start_req(host, mrq);
mmc_wait_for_req_done(host, mrq);
}
EXPORT_SYMBOL(mmc_wait_for_req);
/**
......@@ -1516,6 +1609,82 @@ int mmc_erase_group_aligned(struct mmc_card *card, unsigned int from,
}
EXPORT_SYMBOL(mmc_erase_group_aligned);
static unsigned int mmc_do_calc_max_discard(struct mmc_card *card,
unsigned int arg)
{
struct mmc_host *host = card->host;
unsigned int max_discard, x, y, qty = 0, max_qty, timeout;
unsigned int last_timeout = 0;
if (card->erase_shift)
max_qty = UINT_MAX >> card->erase_shift;
else if (mmc_card_sd(card))
max_qty = UINT_MAX;
else
max_qty = UINT_MAX / card->erase_size;
/* Find the largest qty with an OK timeout */
do {
y = 0;
for (x = 1; x && x <= max_qty && max_qty - x >= qty; x <<= 1) {
timeout = mmc_erase_timeout(card, arg, qty + x);
if (timeout > host->max_discard_to)
break;
if (timeout < last_timeout)
break;
last_timeout = timeout;
y = x;
}
qty += y;
} while (y);
if (!qty)
return 0;
if (qty == 1)
return 1;
/* Convert qty to sectors */
if (card->erase_shift)
max_discard = --qty << card->erase_shift;
else if (mmc_card_sd(card))
max_discard = qty;
else
max_discard = --qty * card->erase_size;
return max_discard;
}
unsigned int mmc_calc_max_discard(struct mmc_card *card)
{
struct mmc_host *host = card->host;
unsigned int max_discard, max_trim;
if (!host->max_discard_to)
return UINT_MAX;
/*
* Without erase_group_def set, MMC erase timeout depends on clock
 * frequency, which can change. In that case, the best choice is
* just the preferred erase size.
*/
if (mmc_card_mmc(card) && !(card->ext_csd.erase_group_def & 1))
return card->pref_erase;
max_discard = mmc_do_calc_max_discard(card, MMC_ERASE_ARG);
if (mmc_can_trim(card)) {
max_trim = mmc_do_calc_max_discard(card, MMC_TRIM_ARG);
if (max_trim < max_discard)
max_discard = max_trim;
} else if (max_discard < card->erase_size) {
max_discard = 0;
}
pr_debug("%s: calculated max. discard sectors %u for timeout %u ms\n",
mmc_hostname(host), max_discard, host->max_discard_to);
return max_discard;
}
EXPORT_SYMBOL(mmc_calc_max_discard);
int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen)
{
struct mmc_command cmd = {0};
......@@ -1663,6 +1832,10 @@ int mmc_power_save_host(struct mmc_host *host)
{
int ret = 0;
#ifdef CONFIG_MMC_DEBUG
pr_info("%s: %s: powering down\n", mmc_hostname(host), __func__);
#endif
mmc_bus_get(host);
if (!host->bus_ops || host->bus_dead || !host->bus_ops->power_restore) {
......@@ -1685,6 +1858,10 @@ int mmc_power_restore_host(struct mmc_host *host)
{
int ret;
#ifdef CONFIG_MMC_DEBUG
pr_info("%s: %s: powering up\n", mmc_hostname(host), __func__);
#endif
mmc_bus_get(host);
if (!host->bus_ops || host->bus_dead || !host->bus_ops->power_restore) {
......
......@@ -409,52 +409,62 @@ int mmc_sd_switch_hs(struct mmc_card *card)
static int sd_select_driver_type(struct mmc_card *card, u8 *status)
{
int host_drv_type = 0, card_drv_type = 0;
int host_drv_type = SD_DRIVER_TYPE_B;
int card_drv_type = SD_DRIVER_TYPE_B;
int drive_strength;
int err;
/*
* If the host doesn't support any of the Driver Types A,C or D,
* default Driver Type B is used.
* or there is no board specific handler then default Driver
* Type B is used.
*/
if (!(card->host->caps & (MMC_CAP_DRIVER_TYPE_A | MMC_CAP_DRIVER_TYPE_C
| MMC_CAP_DRIVER_TYPE_D)))
return 0;
if (card->host->caps & MMC_CAP_DRIVER_TYPE_A) {
host_drv_type = MMC_SET_DRIVER_TYPE_A;
if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_A)
card_drv_type = MMC_SET_DRIVER_TYPE_A;
else if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_B)
card_drv_type = MMC_SET_DRIVER_TYPE_B;
else if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_C)
card_drv_type = MMC_SET_DRIVER_TYPE_C;
} else if (card->host->caps & MMC_CAP_DRIVER_TYPE_C) {
host_drv_type = MMC_SET_DRIVER_TYPE_C;
if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_C)
card_drv_type = MMC_SET_DRIVER_TYPE_C;
} else if (!(card->host->caps & MMC_CAP_DRIVER_TYPE_D)) {
/*
* If we are here, that means only the default driver type
* B is supported by the host.
*/
host_drv_type = MMC_SET_DRIVER_TYPE_B;
if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_B)
card_drv_type = MMC_SET_DRIVER_TYPE_B;
else if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_C)
card_drv_type = MMC_SET_DRIVER_TYPE_C;
}
if (!card->host->ops->select_drive_strength)
return 0;
if (card->host->caps & MMC_CAP_DRIVER_TYPE_A)
host_drv_type |= SD_DRIVER_TYPE_A;
if (card->host->caps & MMC_CAP_DRIVER_TYPE_C)
host_drv_type |= SD_DRIVER_TYPE_C;
if (card->host->caps & MMC_CAP_DRIVER_TYPE_D)
host_drv_type |= SD_DRIVER_TYPE_D;
if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_A)
card_drv_type |= SD_DRIVER_TYPE_A;
if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_C)
card_drv_type |= SD_DRIVER_TYPE_C;
if (card->sw_caps.sd3_drv_type & SD_DRIVER_TYPE_D)
card_drv_type |= SD_DRIVER_TYPE_D;
/*
* The drive strength that the hardware can support
* depends on the board design. Pass the appropriate
* information and let the hardware specific code
* return what is possible given the options
*/
drive_strength = card->host->ops->select_drive_strength(
card->sw_caps.uhs_max_dtr,
host_drv_type, card_drv_type);
err = mmc_sd_switch(card, 1, 2, card_drv_type, status);
err = mmc_sd_switch(card, 1, 2, drive_strength, status);
if (err)
return err;
if ((status[15] & 0xF) != card_drv_type) {
printk(KERN_WARNING "%s: Problem setting driver strength!\n",
if ((status[15] & 0xF) != drive_strength) {
printk(KERN_WARNING "%s: Problem setting drive strength!\n",
mmc_hostname(card->host));
return 0;
}
mmc_set_driver_type(card->host, host_drv_type);
mmc_set_driver_type(card->host, drive_strength);
return 0;
}
......
......@@ -167,11 +167,8 @@ static int sdio_bus_remove(struct device *dev)
int ret = 0;
/* Make sure card is powered before invoking ->remove() */
if (func->card->host->caps & MMC_CAP_POWER_OFF_CARD) {
ret = pm_runtime_get_sync(dev);
if (ret < 0)
goto out;
}
if (func->card->host->caps & MMC_CAP_POWER_OFF_CARD)
pm_runtime_get_sync(dev);
drv->remove(func);
......@@ -191,7 +188,6 @@ static int sdio_bus_remove(struct device *dev)
if (func->card->host->caps & MMC_CAP_POWER_OFF_CARD)
pm_runtime_put_sync(dev);
out:
return ret;
}
......
......@@ -81,28 +81,32 @@ config MMC_RICOH_MMC
If unsure, say Y.
config MMC_SDHCI_OF
tristate "SDHCI support on OpenFirmware platforms"
depends on MMC_SDHCI && OF
config MMC_SDHCI_PLTFM
tristate "SDHCI platform and OF driver helper"
depends on MMC_SDHCI
help
This selects the OF support for Secure Digital Host Controller
Interfaces.
This selects the common helper functions support for Secure Digital
Host Controller Interface based platform and OF drivers.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_SDHCI_OF_ESDHC
bool "SDHCI OF support for the Freescale eSDHC controller"
depends on MMC_SDHCI_OF
tristate "SDHCI OF support for the Freescale eSDHC controller"
depends on MMC_SDHCI_PLTFM
depends on PPC_OF
select MMC_SDHCI_BIG_ENDIAN_32BIT_BYTE_SWAPPER
help
This selects the Freescale eSDHC controller support.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_SDHCI_OF_HLWD
bool "SDHCI OF support for the Nintendo Wii SDHCI controllers"
depends on MMC_SDHCI_OF
tristate "SDHCI OF support for the Nintendo Wii SDHCI controllers"
depends on MMC_SDHCI_PLTFM
depends on PPC_OF
select MMC_SDHCI_BIG_ENDIAN_32BIT_BYTE_SWAPPER
help
......@@ -110,40 +114,36 @@ config MMC_SDHCI_OF_HLWD
found in the "Hollywood" chipset of the Nintendo Wii video game
console.
If unsure, say N.
config MMC_SDHCI_PLTFM
tristate "SDHCI support on the platform specific bus"
depends on MMC_SDHCI
help
This selects the platform specific bus support for Secure Digital Host
Controller Interface.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_SDHCI_CNS3XXX
bool "SDHCI support on the Cavium Networks CNS3xxx SoC"
tristate "SDHCI support on the Cavium Networks CNS3xxx SoC"
depends on ARCH_CNS3XXX
depends on MMC_SDHCI_PLTFM
help
This selects the SDHCI support for CNS3xxx System-on-Chip devices.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_SDHCI_ESDHC_IMX
bool "SDHCI platform support for the Freescale eSDHC i.MX controller"
depends on MMC_SDHCI_PLTFM && (ARCH_MX25 || ARCH_MX35 || ARCH_MX5)
tristate "SDHCI platform support for the Freescale eSDHC i.MX controller"
depends on ARCH_MX25 || ARCH_MX35 || ARCH_MX5
depends on MMC_SDHCI_PLTFM
select MMC_SDHCI_IO_ACCESSORS
help
This selects the Freescale eSDHC controller support on the platform
bus, found on platforms like mx35/51.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_SDHCI_DOVE
bool "SDHCI support on Marvell's Dove SoC"
tristate "SDHCI support on Marvell's Dove SoC"
depends on ARCH_DOVE
depends on MMC_SDHCI_PLTFM
select MMC_SDHCI_IO_ACCESSORS
......@@ -151,11 +151,14 @@ config MMC_SDHCI_DOVE
This selects the Secure Digital Host Controller Interface in
Marvell's Dove SoC.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_SDHCI_TEGRA
bool "SDHCI platform support for the Tegra SD/MMC Controller"
depends on MMC_SDHCI_PLTFM && ARCH_TEGRA
tristate "SDHCI platform support for the Tegra SD/MMC Controller"
depends on ARCH_TEGRA
depends on MMC_SDHCI_PLTFM
select MMC_SDHCI_IO_ACCESSORS
help
This selects the Tegra SD/MMC controller. If you have a Tegra
......@@ -178,14 +181,28 @@ config MMC_SDHCI_S3C
If unsure, say N.
config MMC_SDHCI_PXA
tristate "Marvell PXA168/PXA910/MMP2 SD Host Controller support"
depends on ARCH_PXA || ARCH_MMP
config MMC_SDHCI_PXAV3
tristate "Marvell MMP2 SD Host Controller support (PXAV3)"
depends on CLKDEV_LOOKUP
select MMC_SDHCI
select MMC_SDHCI_IO_ACCESSORS
select MMC_SDHCI_PLTFM
default CPU_MMP2
help
This selects the Marvell(R) PXAV3 SD Host Controller.
If you have a MMP2 platform with SD Host Controller
and a card slot, say Y or M here.
If unsure, say N.
config MMC_SDHCI_PXAV2
tristate "Marvell PXA9XX SD Host Controller support (PXAV2)"
depends on CLKDEV_LOOKUP
select MMC_SDHCI
select MMC_SDHCI_PLTFM
default CPU_PXA910
help
This selects the Marvell(R) PXA168/PXA910/MMP2 SD Host Controller.
If you have a PXA168/PXA910/MMP2 platform with SD Host Controller
This selects the Marvell(R) PXAV2 SD Host Controller.
If you have a PXA9XX platform with SD Host Controller
and a card slot, say Y or M here.
If unsure, say N.
......@@ -281,13 +298,12 @@ config MMC_ATMELMCI
endchoice
config MMC_ATMELMCI_DMA
bool "Atmel MCI DMA support (EXPERIMENTAL)"
depends on MMC_ATMELMCI && (AVR32 || ARCH_AT91SAM9G45) && DMA_ENGINE && EXPERIMENTAL
bool "Atmel MCI DMA support"
depends on MMC_ATMELMCI && (AVR32 || ARCH_AT91SAM9G45) && DMA_ENGINE
help
Say Y here to have the Atmel MCI driver use a DMA engine to
do data transfers and thus increase the throughput and
reduce the CPU utilization. Note that this is highly
experimental and may cause the driver to lock up.
reduce the CPU utilization.
If unsure, say N.
......
......@@ -9,7 +9,8 @@ obj-$(CONFIG_MMC_MXC) += mxcmmc.o
obj-$(CONFIG_MMC_MXS) += mxs-mmc.o
obj-$(CONFIG_MMC_SDHCI) += sdhci.o
obj-$(CONFIG_MMC_SDHCI_PCI) += sdhci-pci.o
obj-$(CONFIG_MMC_SDHCI_PXA) += sdhci-pxa.o
obj-$(CONFIG_MMC_SDHCI_PXAV3) += sdhci-pxav3.o
obj-$(CONFIG_MMC_SDHCI_PXAV2) += sdhci-pxav2.o
obj-$(CONFIG_MMC_SDHCI_S3C) += sdhci-s3c.o
obj-$(CONFIG_MMC_SDHCI_SPEAR) += sdhci-spear.o
obj-$(CONFIG_MMC_WBSD) += wbsd.o
......@@ -31,9 +32,7 @@ obj-$(CONFIG_MMC_SDRICOH_CS) += sdricoh_cs.o
obj-$(CONFIG_MMC_TMIO) += tmio_mmc.o
obj-$(CONFIG_MMC_TMIO_CORE) += tmio_mmc_core.o
tmio_mmc_core-y := tmio_mmc_pio.o
ifneq ($(CONFIG_MMC_SDHI),n)
tmio_mmc_core-y += tmio_mmc_dma.o
endif
tmio_mmc_core-$(subst m,y,$(CONFIG_MMC_SDHI)) += tmio_mmc_dma.o
obj-$(CONFIG_MMC_SDHI) += sh_mobile_sdhi.o
obj-$(CONFIG_MMC_CB710) += cb710-mmc.o
obj-$(CONFIG_MMC_VIA_SDMMC) += via-sdmmc.o
......@@ -44,17 +43,13 @@ obj-$(CONFIG_MMC_JZ4740) += jz4740_mmc.o
obj-$(CONFIG_MMC_VUB300) += vub300.o
obj-$(CONFIG_MMC_USHC) += ushc.o
obj-$(CONFIG_MMC_SDHCI_PLTFM) += sdhci-platform.o
sdhci-platform-y := sdhci-pltfm.o
sdhci-platform-$(CONFIG_MMC_SDHCI_CNS3XXX) += sdhci-cns3xxx.o
sdhci-platform-$(CONFIG_MMC_SDHCI_ESDHC_IMX) += sdhci-esdhc-imx.o
sdhci-platform-$(CONFIG_MMC_SDHCI_DOVE) += sdhci-dove.o
sdhci-platform-$(CONFIG_MMC_SDHCI_TEGRA) += sdhci-tegra.o
obj-$(CONFIG_MMC_SDHCI_OF) += sdhci-of.o
sdhci-of-y := sdhci-of-core.o
sdhci-of-$(CONFIG_MMC_SDHCI_OF_ESDHC) += sdhci-of-esdhc.o
sdhci-of-$(CONFIG_MMC_SDHCI_OF_HLWD) += sdhci-of-hlwd.o
obj-$(CONFIG_MMC_SDHCI_PLTFM) += sdhci-pltfm.o
obj-$(CONFIG_MMC_SDHCI_CNS3XXX) += sdhci-cns3xxx.o
obj-$(CONFIG_MMC_SDHCI_ESDHC_IMX) += sdhci-esdhc-imx.o
obj-$(CONFIG_MMC_SDHCI_DOVE) += sdhci-dove.o
obj-$(CONFIG_MMC_SDHCI_TEGRA) += sdhci-tegra.o
obj-$(CONFIG_MMC_SDHCI_OF_ESDHC) += sdhci-of-esdhc.o
obj-$(CONFIG_MMC_SDHCI_OF_HLWD) += sdhci-of-hlwd.o
ifeq ($(CONFIG_CB710_DEBUG),y)
CFLAGS-cb710-mmc += -DDEBUG
......
......@@ -77,7 +77,8 @@
#include <mach/board.h>
#include <mach/cpu.h>
#include <mach/at91_mci.h>
#include "at91_mci.h"
#define DRIVER_NAME "at91_mci"
......
/*
* arch/arm/mach-at91/include/mach/at91_mci.h
* drivers/mmc/host/at91_mci.h
*
* Copyright (C) 2005 Ivan Kokshaysky
* Copyright (C) SAN People
......
......@@ -203,6 +203,7 @@ struct atmel_mci_slot {
#define ATMCI_CARD_PRESENT 0
#define ATMCI_CARD_NEED_INIT 1
#define ATMCI_SHUTDOWN 2
#define ATMCI_SUSPENDED 3
int detect_pin;
int wp_pin;
......@@ -1878,10 +1879,72 @@ static int __exit atmci_remove(struct platform_device *pdev)
return 0;
}
#ifdef CONFIG_PM
static int atmci_suspend(struct device *dev)
{
struct atmel_mci *host = dev_get_drvdata(dev);
int i;
for (i = 0; i < ATMEL_MCI_MAX_NR_SLOTS; i++) {
struct atmel_mci_slot *slot = host->slot[i];
int ret;
if (!slot)
continue;
ret = mmc_suspend_host(slot->mmc);
if (ret < 0) {
while (--i >= 0) {
slot = host->slot[i];
if (slot
&& test_bit(ATMCI_SUSPENDED, &slot->flags)) {
mmc_resume_host(host->slot[i]->mmc);
clear_bit(ATMCI_SUSPENDED, &slot->flags);
}
}
return ret;
} else {
set_bit(ATMCI_SUSPENDED, &slot->flags);
}
}
return 0;
}
static int atmci_resume(struct device *dev)
{
struct atmel_mci *host = dev_get_drvdata(dev);
int i;
int ret = 0;
for (i = 0; i < ATMEL_MCI_MAX_NR_SLOTS; i++) {
struct atmel_mci_slot *slot = host->slot[i];
int err;
slot = host->slot[i];
if (!slot)
continue;
if (!test_bit(ATMCI_SUSPENDED, &slot->flags))
continue;
err = mmc_resume_host(slot->mmc);
if (err < 0)
ret = err;
else
clear_bit(ATMCI_SUSPENDED, &slot->flags);
}
return ret;
}
static SIMPLE_DEV_PM_OPS(atmci_pm, atmci_suspend, atmci_resume);
#define ATMCI_PM_OPS (&atmci_pm)
#else
#define ATMCI_PM_OPS NULL
#endif
static struct platform_driver atmci_driver = {
.remove = __exit_p(atmci_remove),
.driver = {
.name = "atmel_mci",
.pm = ATMCI_PM_OPS,
},
};
......
......@@ -118,7 +118,6 @@
#define SDMMC_CMD_INDX(n) ((n) & 0x1F)
/* Status register defines */
#define SDMMC_GET_FCNT(x) (((x)>>17) & 0x1FF)
#define SDMMC_FIFO_SZ 32
/* Internal DMAC interrupt defines */
#define SDMMC_IDMAC_INT_AI BIT(9)
#define SDMMC_IDMAC_INT_NI BIT(8)
......@@ -134,22 +133,22 @@
/* Register access macros */
#define mci_readl(dev, reg) \
__raw_readl(dev->regs + SDMMC_##reg)
__raw_readl((dev)->regs + SDMMC_##reg)
#define mci_writel(dev, reg, value) \
__raw_writel((value), dev->regs + SDMMC_##reg)
__raw_writel((value), (dev)->regs + SDMMC_##reg)
/* 16-bit FIFO access macros */
#define mci_readw(dev, reg) \
__raw_readw(dev->regs + SDMMC_##reg)
__raw_readw((dev)->regs + SDMMC_##reg)
#define mci_writew(dev, reg, value) \
__raw_writew((value), dev->regs + SDMMC_##reg)
__raw_writew((value), (dev)->regs + SDMMC_##reg)
/* 64-bit FIFO access macros */
#ifdef readq
#define mci_readq(dev, reg) \
__raw_readq(dev->regs + SDMMC_##reg)
__raw_readq((dev)->regs + SDMMC_##reg)
#define mci_writeq(dev, reg, value) \
__raw_writeq((value), dev->regs + SDMMC_##reg)
__raw_writeq((value), (dev)->regs + SDMMC_##reg)
#else
/*
* Dummy readq implementation for architectures that don't define it.
......@@ -160,9 +159,9 @@
* rest of the code free from ifdefs.
*/
#define mci_readq(dev, reg) \
(*(volatile u64 __force *)(dev->regs + SDMMC_##reg))
(*(volatile u64 __force *)((dev)->regs + SDMMC_##reg))
#define mci_writeq(dev, reg, value) \
(*(volatile u64 __force *)(dev->regs + SDMMC_##reg) = value)
(*(volatile u64 __force *)((dev)->regs + SDMMC_##reg) = (value))
#endif
#endif /* _DW_MMC_H_ */
......@@ -226,6 +226,9 @@ static void __devinit mmci_dma_setup(struct mmci_host *host)
return;
}
/* initialize pre request cookie */
host->next_data.cookie = 1;
/* Try to acquire a generic DMA engine slave channel */
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
......@@ -335,7 +338,8 @@ static void mmci_dma_unmap(struct mmci_host *host, struct mmc_data *data)
dir = DMA_FROM_DEVICE;
}
dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
if (!data->host_cookie)
dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
/*
* Use of DMA with scatter-gather is impossible.
......@@ -353,7 +357,8 @@ static void mmci_dma_data_error(struct mmci_host *host)
dmaengine_terminate_all(host->dma_current);
}
static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
static int mmci_dma_prep_data(struct mmci_host *host, struct mmc_data *data,
struct mmci_host_next *next)
{
struct variant_data *variant = host->variant;
struct dma_slave_config conf = {
......@@ -364,13 +369,20 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
.src_maxburst = variant->fifohalfsize >> 2, /* # of words */
.dst_maxburst = variant->fifohalfsize >> 2, /* # of words */
};
struct mmc_data *data = host->data;
struct dma_chan *chan;
struct dma_device *device;
struct dma_async_tx_descriptor *desc;
int nr_sg;
host->dma_current = NULL;
/* Check if next job is already prepared */
if (data->host_cookie && !next &&
host->dma_current && host->dma_desc_current)
return 0;
if (!next) {
host->dma_current = NULL;
host->dma_desc_current = NULL;
}
if (data->flags & MMC_DATA_READ) {
conf.direction = DMA_FROM_DEVICE;
......@@ -385,7 +397,7 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
return -EINVAL;
/* If less than or equal to the fifo size, don't bother with DMA */
if (host->size <= variant->fifosize)
if (data->blksz * data->blocks <= variant->fifosize)
return -EINVAL;
device = chan->device;
......@@ -399,14 +411,38 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
if (!desc)
goto unmap_exit;
/* Okay, go for it. */
host->dma_current = chan;
if (next) {
next->dma_chan = chan;
next->dma_desc = desc;
} else {
host->dma_current = chan;
host->dma_desc_current = desc;
}
return 0;
unmap_exit:
if (!next)
dmaengine_terminate_all(chan);
dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction);
return -ENOMEM;
}
static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
{
int ret;
struct mmc_data *data = host->data;
ret = mmci_dma_prep_data(host, host->data, NULL);
if (ret)
return ret;
/* Okay, go for it. */
dev_vdbg(mmc_dev(host->mmc),
"Submit MMCI DMA job, sglen %d blksz %04x blks %04x flags %08x\n",
data->sg_len, data->blksz, data->blocks, data->flags);
dmaengine_submit(desc);
dma_async_issue_pending(chan);
dmaengine_submit(host->dma_desc_current);
dma_async_issue_pending(host->dma_current);
datactrl |= MCI_DPSM_DMAENABLE;
......@@ -421,14 +457,90 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
writel(readl(host->base + MMCIMASK0) | MCI_DATAENDMASK,
host->base + MMCIMASK0);
return 0;
}
unmap_exit:
dmaengine_terminate_all(chan);
dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction);
return -ENOMEM;
static void mmci_get_next_data(struct mmci_host *host, struct mmc_data *data)
{
struct mmci_host_next *next = &host->next_data;
if (data->host_cookie && data->host_cookie != next->cookie) {
printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d"
" host->next_data.cookie %d\n",
__func__, data->host_cookie, host->next_data.cookie);
data->host_cookie = 0;
}
if (!data->host_cookie)
return;
host->dma_desc_current = next->dma_desc;
host->dma_current = next->dma_chan;
next->dma_desc = NULL;
next->dma_chan = NULL;
}
static void mmci_pre_request(struct mmc_host *mmc, struct mmc_request *mrq,
bool is_first_req)
{
struct mmci_host *host = mmc_priv(mmc);
struct mmc_data *data = mrq->data;
struct mmci_host_next *nd = &host->next_data;
if (!data)
return;
if (data->host_cookie) {
data->host_cookie = 0;
return;
}
/* if config for dma */
if (((data->flags & MMC_DATA_WRITE) && host->dma_tx_channel) ||
((data->flags & MMC_DATA_READ) && host->dma_rx_channel)) {
if (mmci_dma_prep_data(host, data, nd))
data->host_cookie = 0;
else
data->host_cookie = ++nd->cookie < 0 ? 1 : nd->cookie;
}
}
static void mmci_post_request(struct mmc_host *mmc, struct mmc_request *mrq,
int err)
{
struct mmci_host *host = mmc_priv(mmc);
struct mmc_data *data = mrq->data;
struct dma_chan *chan;
enum dma_data_direction dir;
if (!data)
return;
if (data->flags & MMC_DATA_READ) {
dir = DMA_FROM_DEVICE;
chan = host->dma_rx_channel;
} else {
dir = DMA_TO_DEVICE;
chan = host->dma_tx_channel;
}
/* if config for dma */
if (chan) {
if (err)
dmaengine_terminate_all(chan);
if (err || data->host_cookie)
dma_unmap_sg(mmc_dev(host->mmc), data->sg,
data->sg_len, dir);
mrq->data->host_cookie = 0;
}
}
#else
/* Blank functions if the DMA engine is not available */
static void mmci_get_next_data(struct mmci_host *host, struct mmc_data *data)
{
}
static inline void mmci_dma_setup(struct mmci_host *host)
{
}
......@@ -449,6 +561,10 @@ static inline int mmci_dma_start_data(struct mmci_host *host, unsigned int datac
{
return -ENOSYS;
}
#define mmci_pre_request NULL
#define mmci_post_request NULL
#endif
static void mmci_start_data(struct mmci_host *host, struct mmc_data *data)
......@@ -872,6 +988,9 @@ static void mmci_request(struct mmc_host *mmc, struct mmc_request *mrq)
host->mrq = mrq;
if (mrq->data)
mmci_get_next_data(host, mrq->data);
if (mrq->data && mrq->data->flags & MMC_DATA_READ)
mmci_start_data(host, mrq->data);
......@@ -986,6 +1105,8 @@ static irqreturn_t mmci_cd_irq(int irq, void *dev_id)
static const struct mmc_host_ops mmci_ops = {
.request = mmci_request,
.pre_req = mmci_pre_request,
.post_req = mmci_post_request,
.set_ios = mmci_set_ios,
.get_ro = mmci_get_ro,
.get_cd = mmci_get_cd,
......
......@@ -166,6 +166,12 @@ struct clk;
struct variant_data;
struct dma_chan;
struct mmci_host_next {
struct dma_async_tx_descriptor *dma_desc;
struct dma_chan *dma_chan;
s32 cookie;
};
struct mmci_host {
phys_addr_t phybase;
void __iomem *base;
......@@ -203,6 +209,8 @@ struct mmci_host {
struct dma_chan *dma_current;
struct dma_chan *dma_rx_channel;
struct dma_chan *dma_tx_channel;
struct dma_async_tx_descriptor *dma_desc_current;
struct mmci_host_next next_data;
#define dma_inprogress(host) ((host)->dma_current)
#else
......
......@@ -564,40 +564,38 @@ static void mxs_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
static void mxs_mmc_set_clk_rate(struct mxs_mmc_host *host, unsigned int rate)
{
unsigned int ssp_rate, bit_rate;
u32 div1, div2;
unsigned int ssp_clk, ssp_sck;
u32 clock_divide, clock_rate;
u32 val;
ssp_rate = clk_get_rate(host->clk);
ssp_clk = clk_get_rate(host->clk);
for (div1 = 2; div1 < 254; div1 += 2) {
div2 = ssp_rate / rate / div1;
if (div2 < 0x100)
for (clock_divide = 2; clock_divide <= 254; clock_divide += 2) {
clock_rate = DIV_ROUND_UP(ssp_clk, rate * clock_divide);
clock_rate = (clock_rate > 0) ? clock_rate - 1 : 0;
if (clock_rate <= 255)
break;
}
if (div1 >= 254) {
if (clock_divide > 254) {
dev_err(mmc_dev(host->mmc),
"%s: cannot set clock to %d\n", __func__, rate);
return;
}
if (div2 == 0)
bit_rate = ssp_rate / div1;
else
bit_rate = ssp_rate / div1 / div2;
ssp_sck = ssp_clk / clock_divide / (1 + clock_rate);
val = readl(host->base + HW_SSP_TIMING);
val &= ~(BM_SSP_TIMING_CLOCK_DIVIDE | BM_SSP_TIMING_CLOCK_RATE);
val |= BF_SSP(div1, TIMING_CLOCK_DIVIDE);
val |= BF_SSP(div2 - 1, TIMING_CLOCK_RATE);
val |= BF_SSP(clock_divide, TIMING_CLOCK_DIVIDE);
val |= BF_SSP(clock_rate, TIMING_CLOCK_RATE);
writel(val, host->base + HW_SSP_TIMING);
host->clk_rate = bit_rate;
host->clk_rate = ssp_sck;
dev_dbg(mmc_dev(host->mmc),
"%s: div1 %d, div2 %d, ssp %d, bit %d, rate %d\n",
__func__, div1, div2, ssp_rate, bit_rate, rate);
"%s: clock_divide %d, clock_rate %d, ssp_clk %d, rate_actual %d, rate_requested %d\n",
__func__, clock_divide, clock_rate, ssp_clk, ssp_sck, rate);
}
static void mxs_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
......
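To make the corrected divider math concrete: assuming, for illustration, an
SSP clock of 96 MHz and a requested rate of 25 MHz, the loop settles on
clock_divide = 2 with clock_rate = DIV_ROUND_UP(96000000, 25000000 * 2) - 1
= 1, giving ssp_sck = 96000000 / 2 / (1 + 1) = 24 MHz, just under the
requested rate. The old code would have computed div2 = 96 / 25 / 2 = 1 and
programmed a 48 MHz bit rate, overshooting the request.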
......@@ -15,9 +15,7 @@
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/mmc/host.h>
#include <linux/mmc/sdhci-pltfm.h>
#include <mach/cns3xxx.h>
#include "sdhci.h"
#include "sdhci-pltfm.h"
static unsigned int sdhci_cns3xxx_get_max_clk(struct sdhci_host *host)
......@@ -86,7 +84,7 @@ static struct sdhci_ops sdhci_cns3xxx_ops = {
.set_clock = sdhci_cns3xxx_set_clock,
};
struct sdhci_pltfm_data sdhci_cns3xxx_pdata = {
static struct sdhci_pltfm_data sdhci_cns3xxx_pdata = {
.ops = &sdhci_cns3xxx_ops,
.quirks = SDHCI_QUIRK_BROKEN_DMA |
SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
......@@ -95,3 +93,43 @@ struct sdhci_pltfm_data sdhci_cns3xxx_pdata = {
SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
SDHCI_QUIRK_NONSTANDARD_CLOCK,
};
static int __devinit sdhci_cns3xxx_probe(struct platform_device *pdev)
{
return sdhci_pltfm_register(pdev, &sdhci_cns3xxx_pdata);
}
static int __devexit sdhci_cns3xxx_remove(struct platform_device *pdev)
{
return sdhci_pltfm_unregister(pdev);
}
static struct platform_driver sdhci_cns3xxx_driver = {
.driver = {
.name = "sdhci-cns3xxx",
.owner = THIS_MODULE,
},
.probe = sdhci_cns3xxx_probe,
.remove = __devexit_p(sdhci_cns3xxx_remove),
#ifdef CONFIG_PM
.suspend = sdhci_pltfm_suspend,
.resume = sdhci_pltfm_resume,
#endif
};
static int __init sdhci_cns3xxx_init(void)
{
return platform_driver_register(&sdhci_cns3xxx_driver);
}
module_init(sdhci_cns3xxx_init);
static void __exit sdhci_cns3xxx_exit(void)
{
platform_driver_unregister(&sdhci_cns3xxx_driver);
}
module_exit(sdhci_cns3xxx_exit);
MODULE_DESCRIPTION("SDHCI driver for CNS3xxx");
MODULE_AUTHOR("Scott Shu, "
"Anton Vorontsov <avorontsov@mvista.com>");
MODULE_LICENSE("GPL v2");
......@@ -22,7 +22,6 @@
#include <linux/io.h>
#include <linux/mmc/host.h>
#include "sdhci.h"
#include "sdhci-pltfm.h"
static u16 sdhci_dove_readw(struct sdhci_host *host, int reg)
......@@ -61,10 +60,50 @@ static struct sdhci_ops sdhci_dove_ops = {
.read_l = sdhci_dove_readl,
};
struct sdhci_pltfm_data sdhci_dove_pdata = {
static struct sdhci_pltfm_data sdhci_dove_pdata = {
.ops = &sdhci_dove_ops,
.quirks = SDHCI_QUIRK_NO_SIMULT_VDD_AND_POWER |
SDHCI_QUIRK_NO_BUSY_IRQ |
SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
SDHCI_QUIRK_FORCE_DMA,
};
static int __devinit sdhci_dove_probe(struct platform_device *pdev)
{
return sdhci_pltfm_register(pdev, &sdhci_dove_pdata);
}
static int __devexit sdhci_dove_remove(struct platform_device *pdev)
{
return sdhci_pltfm_unregister(pdev);
}
static struct platform_driver sdhci_dove_driver = {
.driver = {
.name = "sdhci-dove",
.owner = THIS_MODULE,
},
.probe = sdhci_dove_probe,
.remove = __devexit_p(sdhci_dove_remove),
#ifdef CONFIG_PM
.suspend = sdhci_pltfm_suspend,
.resume = sdhci_pltfm_resume,
#endif
};
static int __init sdhci_dove_init(void)
{
return platform_driver_register(&sdhci_dove_driver);
}
module_init(sdhci_dove_init);
static void __exit sdhci_dove_exit(void)
{
platform_driver_unregister(&sdhci_dove_driver);
}
module_exit(sdhci_dove_exit);
MODULE_DESCRIPTION("SDHCI driver for Dove");
MODULE_AUTHOR("Saeed Bishara <saeed@marvell.com>, "
"Mike Rapoport <mike@compulab.co.il>");
MODULE_LICENSE("GPL v2");
......@@ -16,8 +16,7 @@
#include <linux/io.h>
#include <linux/delay.h>
#include <linux/mmc/host.h>
#include "sdhci-of.h"
#include "sdhci.h"
#include "sdhci-pltfm.h"
#include "sdhci-esdhc.h"
static u16 esdhc_readw(struct sdhci_host *host, int reg)
......@@ -60,32 +59,83 @@ static int esdhc_of_enable_dma(struct sdhci_host *host)
static unsigned int esdhc_of_get_max_clock(struct sdhci_host *host)
{
struct sdhci_of_host *of_host = sdhci_priv(host);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
return of_host->clock;
return pltfm_host->clock;
}
static unsigned int esdhc_of_get_min_clock(struct sdhci_host *host)
{
struct sdhci_of_host *of_host = sdhci_priv(host);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
return of_host->clock / 256 / 16;
return pltfm_host->clock / 256 / 16;
}
struct sdhci_of_data sdhci_esdhc = {
static struct sdhci_ops sdhci_esdhc_ops = {
.read_l = sdhci_be32bs_readl,
.read_w = esdhc_readw,
.read_b = sdhci_be32bs_readb,
.write_l = sdhci_be32bs_writel,
.write_w = esdhc_writew,
.write_b = esdhc_writeb,
.set_clock = esdhc_set_clock,
.enable_dma = esdhc_of_enable_dma,
.get_max_clock = esdhc_of_get_max_clock,
.get_min_clock = esdhc_of_get_min_clock,
};
static struct sdhci_pltfm_data sdhci_esdhc_pdata = {
/* card detection could be handled via GPIO */
.quirks = ESDHC_DEFAULT_QUIRKS | SDHCI_QUIRK_BROKEN_CARD_DETECTION
| SDHCI_QUIRK_NO_CARD_NO_RESET,
.ops = {
.read_l = sdhci_be32bs_readl,
.read_w = esdhc_readw,
.read_b = sdhci_be32bs_readb,
.write_l = sdhci_be32bs_writel,
.write_w = esdhc_writew,
.write_b = esdhc_writeb,
.set_clock = esdhc_set_clock,
.enable_dma = esdhc_of_enable_dma,
.get_max_clock = esdhc_of_get_max_clock,
.get_min_clock = esdhc_of_get_min_clock,
.ops = &sdhci_esdhc_ops,
};
static int __devinit sdhci_esdhc_probe(struct platform_device *pdev)
{
return sdhci_pltfm_register(pdev, &sdhci_esdhc_pdata);
}
static int __devexit sdhci_esdhc_remove(struct platform_device *pdev)
{
return sdhci_pltfm_unregister(pdev);
}
static const struct of_device_id sdhci_esdhc_of_match[] = {
{ .compatible = "fsl,mpc8379-esdhc" },
{ .compatible = "fsl,mpc8536-esdhc" },
{ .compatible = "fsl,esdhc" },
{ }
};
MODULE_DEVICE_TABLE(of, sdhci_esdhc_of_match);
static struct platform_driver sdhci_esdhc_driver = {
.driver = {
.name = "sdhci-esdhc",
.owner = THIS_MODULE,
.of_match_table = sdhci_esdhc_of_match,
},
.probe = sdhci_esdhc_probe,
.remove = __devexit_p(sdhci_esdhc_remove),
#ifdef CONFIG_PM
.suspend = sdhci_pltfm_suspend,
.resume = sdhci_pltfm_resume,
#endif
};
static int __init sdhci_esdhc_init(void)
{
return platform_driver_register(&sdhci_esdhc_driver);
}
module_init(sdhci_esdhc_init);
static void __exit sdhci_esdhc_exit(void)
{
platform_driver_unregister(&sdhci_esdhc_driver);
}
module_exit(sdhci_esdhc_exit);
MODULE_DESCRIPTION("SDHCI OF driver for Freescale MPC eSDHC");
MODULE_AUTHOR("Xiaobo Xie <X.Xie@freescale.com>, "
"Anton Vorontsov <avorontsov@ru.mvista.com>");
MODULE_LICENSE("GPL v2");
......@@ -21,8 +21,7 @@
#include <linux/delay.h>
#include <linux/mmc/host.h>
#include "sdhci-of.h"
#include "sdhci.h"
#include "sdhci-pltfm.h"
/*
* Ops and quirks for the Nintendo Wii SDHCI controllers.
......@@ -51,15 +50,63 @@ static void sdhci_hlwd_writeb(struct sdhci_host *host, u8 val, int reg)
udelay(SDHCI_HLWD_WRITE_DELAY);
}
struct sdhci_of_data sdhci_hlwd = {
static struct sdhci_ops sdhci_hlwd_ops = {
.read_l = sdhci_be32bs_readl,
.read_w = sdhci_be32bs_readw,
.read_b = sdhci_be32bs_readb,
.write_l = sdhci_hlwd_writel,
.write_w = sdhci_hlwd_writew,
.write_b = sdhci_hlwd_writeb,
};
static struct sdhci_pltfm_data sdhci_hlwd_pdata = {
.quirks = SDHCI_QUIRK_32BIT_DMA_ADDR |
SDHCI_QUIRK_32BIT_DMA_SIZE,
.ops = {
.read_l = sdhci_be32bs_readl,
.read_w = sdhci_be32bs_readw,
.read_b = sdhci_be32bs_readb,
.write_l = sdhci_hlwd_writel,
.write_w = sdhci_hlwd_writew,
.write_b = sdhci_hlwd_writeb,
.ops = &sdhci_hlwd_ops,
};
static int __devinit sdhci_hlwd_probe(struct platform_device *pdev)
{
return sdhci_pltfm_register(pdev, &sdhci_hlwd_pdata);
}
static int __devexit sdhci_hlwd_remove(struct platform_device *pdev)
{
return sdhci_pltfm_unregister(pdev);
}
static const struct of_device_id sdhci_hlwd_of_match[] = {
{ .compatible = "nintendo,hollywood-sdhci" },
{ }
};
MODULE_DEVICE_TABLE(of, sdhci_hlwd_of_match);
static struct platform_driver sdhci_hlwd_driver = {
.driver = {
.name = "sdhci-hlwd",
.owner = THIS_MODULE,
.of_match_table = sdhci_hlwd_of_match,
},
.probe = sdhci_hlwd_probe,
.remove = __devexit_p(sdhci_hlwd_remove),
#ifdef CONFIG_PM
.suspend = sdhci_pltfm_suspend,
.resume = sdhci_pltfm_resume,
#endif
};
static int __init sdhci_hlwd_init(void)
{
return platform_driver_register(&sdhci_hlwd_driver);
}
module_init(sdhci_hlwd_init);
static void __exit sdhci_hlwd_exit(void)
{
platform_driver_unregister(&sdhci_hlwd_driver);
}
module_exit(sdhci_hlwd_exit);
MODULE_DESCRIPTION("Nintendo Wii SDHCI OF driver");
MODULE_AUTHOR("The GameCube Linux Team, Albert Herranz");
MODULE_LICENSE("GPL v2");