Commit e413a19a authored by Linus Torvalds

Merge tag 'for-linus-20140610' of git://git.infradead.org/linux-mtd

Pull MTD updates from Brian Norris:
 - refactor m25p80.c driver for use as a general SPI NOR framework for
   other drivers which may speak to SPI NOR flash without providing full
   SPI support (i.e., not part of drivers/spi/)
 - new Freescale QuadSPI driver (utilizing new SPI NOR framework)
 - updates for the STMicro "FSM" SPI NOR driver
 - fix sync/flush behavior on mtd_blkdevs
 - fixup subpage write support on a few NAND drivers
 - correct the MTD OOB test for odd-sized OOB areas
 - add BCH-16 support for OMAP NAND
 - fix warnings and trivial refactoring
 - utilize new ECC DT bindings in pxa3xx NAND driver
 - new LPDDR NVM driver
 - address a few assorted bugs caught by Coverity
 - add new imx6sx support for GPMI NAND
 - use a bounce buffer for NAND when non-DMA-able buffers are used

* tag 'for-linus-20140610' of git://git.infradead.org/linux-mtd: (77 commits)
  mtd: gpmi: add gpmi support for imx6sx
  mtd: maps: remove check for CONFIG_MTD_SUPERH_RESERVE
  mtd: bf5xx_nand: use the managed version of kzalloc
  mtd: pxa3xx_nand: make the driver work on big-endian systems
  mtd: nand: omap: fix omap_calculate_ecc_bch() for-loop error
  mtd: nand: r852: correct write_buf loop bounds
  mtd: nand_bbt: handle error case for nand_create_badblock_pattern()
  mtd: nand_bbt: remove unused variable
  mtd: maps: sc520cdp: fix warnings
  mtd: slram: fix unused variable warning
  mtd: pfow: remove unused variable
  mtd: lpddr: fix Kconfig dependency, for I/O accessors
  mtd: nand: pxa3xx: Add supported ECC strength and step size to the DT binding
  mtd: nand: pxa3xx: Use ECC strength and step size devicetree binding
  mtd: nand: pxa3xx: Clean pxa_ecc_init() error handling
  mtd: nand: Warn the user if the selected ECC strength is too weak
  mtd: nand: omap: Documentation: How to select correct ECC scheme for your device ?
  mtd: nand: omap: add support for BCH16_ECC - NAND driver updates
  mtd: nand: omap: add support for BCH16_ECC - ELM driver updates
  mtd: nand: omap: add support for BCH16_ECC - GPMC driver updates
  ...
parents 8d0304e6 f1900c79
* Freescale Quad Serial Peripheral Interface (QuadSPI)
Required properties:
- compatible : Should be "fsl,vf610-qspi"
- reg : the first contains the register location and length,
the second contains the memory mapping address and length
- reg-names: Should contain the reg names "QuadSPI" and "QuadSPI-memory"
- interrupts : Should contain the interrupt for the device
- clocks : The clocks needed by the QuadSPI controller
- clock-names : the name of the clocks
Optional properties:
- fsl,qspi-has-second-chip: The controller has two buses, bus A and bus B.
Each bus can be connected with two NOR flashes.
Most of the time, each bus has only one NOR flash
connected; this is the default case.
But if there are two NOR flashes connected to the
bus, you should enable this property.
(Please check the board's schematic.)
Example:
qspi0: quadspi@40044000 {
compatible = "fsl,vf610-qspi";
reg = <0x40044000 0x1000>, <0x20000000 0x10000000>;
reg-names = "QuadSPI", "QuadSPI-memory";
interrupts = <0 24 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clks VF610_CLK_QSPI0_EN>,
<&clks VF610_CLK_QSPI0>;
clock-names = "qspi_en", "qspi";
flash0: s25fl128s@0 {
....
};
};
......@@ -28,6 +28,8 @@ Optional properties:
"ham1" 1-bit Hamming ecc code
"bch4" 4-bit BCH ecc code
"bch8" 8-bit BCH ecc code
"bch16" 16-bit BCH ECC code
Refer to the section "How to select correct ECC scheme for your device ?" below.
- ti,nand-xfer-type: A string setting the data transfer type. One of:
......@@ -90,3 +92,46 @@ Example for an AM33xx board:
};
};
How to select correct ECC scheme for your device ?
--------------------------------------------------
Higher ECC schemes usually mean better protection against bit-flips and
increased system lifetime. However, the selection of an ECC scheme also
depends on several other factors, such as:
(1) Support of built-in hardware engines.
Some legacy OMAP SoCs do not have the ELM hardware engine, so those SoCs
cannot support ecc-schemes with hardware error-correction (BCHx_HW). However,
such SoCs can use ecc-schemes with a software library for error-correction
(BCHx_HW_DETECTION_SW). The error-correction capability of the software
library remains equivalent to its hardware counterpart, but there is a
slight CPU penalty when too many bit-flips are detected during reads.
(2) Device parameters like OOBSIZE.
Another factor that governs the selection of an ecc-scheme is the oob-size.
Higher ECC schemes require more OOB/Spare area to store the ECC syndrome,
so the device should have enough free bytes available in its OOB/Spare
area to accommodate the ECC for the entire page. In general, the following
expression helps in determining whether a given device can accommodate the
ECC syndrome:
"2 + (PAGESIZE / 512) * ECC_BYTES" <= OOBSIZE
where
OOBSIZE number of bytes in OOB/spare area
PAGESIZE number of bytes in main-area of device page
ECC_BYTES number of ECC bytes generated to protect
512 bytes of data, which is:
'3' for HAM1_xx ecc schemes
'7' for BCH4_xx ecc schemes
'14' for BCH8_xx ecc schemes
'26' for BCH16_xx ecc schemes
Example(a): For a device with PAGESIZE = 2048 and OOBSIZE = 64, when
trying to use the BCH16 (ECC_BYTES=26) ecc-scheme:
Number of ECC bytes per page = (2 + (2048 / 512) * 26) = 106 B
which is greater than the available OOB/Spare area of the device
(OOBSIZE=64). Hence, BCH16 cannot be supported on this device, but it
can probably use a lower ecc-scheme like BCH8.
Example(b): For a device with PAGESIZE = 2048 and OOBSIZE = 128, when
trying to use the BCH16 (ECC_BYTES=26) ecc-scheme:
Number of ECC bytes per page = (2 + (2048 / 512) * 26) = 106 B
which can be accommodated in the OOB/Spare area of this device
(OOBSIZE=128). So this device can use the BCH16 ecc-scheme.
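As an informal illustration (not part of the binding document), the expression
above can be coded as a small helper; the function name is hypothetical:

#include <stdbool.h>

/* True if the given ecc-scheme fits in the device's OOB/Spare area:
 * 2 reserved bytes plus ECC_BYTES for every 512 bytes of page data. */
static bool ecc_scheme_fits(unsigned int pagesize, unsigned int oobsize,
			    unsigned int ecc_bytes)
{
	return (2 + (pagesize / 512) * ecc_bytes) <= oobsize;
}

/* Example(a): ecc_scheme_fits(2048,  64, 26) -> false (needs 106 bytes) */
/* Example(b): ecc_scheme_fits(2048, 128, 26) -> true  (needs 106 bytes) */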
......@@ -5,8 +5,8 @@ Required properties:
representing partitions.
- compatible : Should be the manufacturer and the name of the chip. Bear in mind
the DT binding is not Linux-only, but in case of Linux, see the
"m25p_ids" table in drivers/mtd/devices/m25p80.c for the list of
supported chips.
"spi_nor_ids" table in drivers/mtd/spi-nor/spi-nor.c for the list
of supported chips.
- reg : Chip-Select number
- spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at
......
......@@ -17,6 +17,14 @@ Optional properties:
- num-cs: Number of chipselect lines to use
- nand-on-flash-bbt: boolean to enable on flash bbt option if
not present false
- nand-ecc-strength: number of bits to correct per ECC step
- nand-ecc-step-size: number of data bytes covered by a single ECC step
The following ECC strength and step size are currently supported:
- nand-ecc-strength = <1>, nand-ecc-step-size = <512>
- nand-ecc-strength = <4>, nand-ecc-step-size = <512>
- nand-ecc-strength = <8>, nand-ecc-step-size = <512>
Example:
......
SPI NOR framework
============================================
Part I - Why do we need this framework?
---------------------------------------
SPI bus controllers (drivers/spi/) only deal with streams of bytes; the bus
controller operates agnostic of the specific device attached. However, some
controllers (such as Freescale's QuadSPI controller) cannot easily handle
arbitrary streams of bytes, but rather are designed specifically for SPI NOR.
In particular, Freescale's QuadSPI controller must know the NOR commands to
find the right LUT sequence. Unfortunately, the SPI subsystem has no notion of
opcodes, addresses, or data payloads; a SPI controller simply knows to send or
receive bytes (Tx and Rx). Therefore, we must define a new layering scheme under
which the controller driver is aware of the opcodes, addressing, and other
details of the SPI NOR protocol.
Part II - How does the framework work?
--------------------------------------
This framework just adds a new layer between the MTD and the SPI bus driver.
With this new layer, the SPI NOR controller driver does not depend on the
m25p80 code anymore.
Before this framework, the layering looked like:
MTD
------------------------
m25p80
------------------------
SPI bus driver
------------------------
SPI NOR chip
After this framework, the layering looks like:
MTD
------------------------
SPI NOR framework
------------------------
m25p80
------------------------
SPI bus driver
------------------------
SPI NOR chip
With the SPI NOR controller driver (Freescale QuadSPI), it looks like:
MTD
------------------------
SPI NOR framework
------------------------
fsl-quadSPI
------------------------
SPI NOR chip
Part III - How can drivers use the framework?
---------------------------------------------
The main API is spi_nor_scan(). Before you call this hook, a driver should
initialize the necessary fields of spi_nor{}. Please see
drivers/mtd/spi-nor/spi-nor.c for details. Please also refer to fsl-quadspi.c
when you want to write a new driver for a SPI NOR controller.
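To make Part III concrete, here is a rough, hypothetical probe sketch modeled
on the m25p80 conversion in this series. struct my_flash and the my_*
transport callbacks are placeholder names (the callbacks are assumed to be
implemented elsewhere with the same signatures as the m25p80_* hooks in the
diff below), and error handling is trimmed; it illustrates the calling
convention, not a drop-in driver.

#include <linux/device.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/spi-nor.h>
#include <linux/spi/spi.h>

struct my_flash {
	struct spi_device	*spi;
	struct spi_nor		spi_nor;
	struct mtd_info		mtd;
};

static int my_nor_probe(struct spi_device *spi)
{
	struct my_flash *flash;
	struct spi_nor *nor;
	int ret;

	flash = devm_kzalloc(&spi->dev, sizeof(*flash), GFP_KERNEL);
	if (!flash)
		return -ENOMEM;

	nor = &flash->spi_nor;

	/* Install the controller-specific transport hooks (defined elsewhere). */
	nor->read = my_read;
	nor->write = my_write;
	nor->erase = my_erase;
	nor->read_reg = my_read_reg;
	nor->write_reg = my_write_reg;

	nor->dev = &spi->dev;
	nor->mtd = &flash->mtd;
	nor->priv = flash;

	spi_set_drvdata(spi, flash);
	flash->mtd.priv = nor;
	flash->spi = spi;

	/* The framework identifies the chip and fills in opcodes, size, etc. */
	ret = spi_nor_scan(nor, spi_get_device_id(spi), SPI_NOR_NORMAL);
	if (ret)
		return ret;

	/* Register the resulting MTD device as usual. */
	return mtd_device_parse_register(&flash->mtd, NULL, NULL, NULL, 0);
}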
......@@ -68,6 +68,9 @@
#define GPMC_ECC_BCH_RESULT_1 0x244 /* not available on OMAP2 */
#define GPMC_ECC_BCH_RESULT_2 0x248 /* not available on OMAP2 */
#define GPMC_ECC_BCH_RESULT_3 0x24c /* not available on OMAP2 */
#define GPMC_ECC_BCH_RESULT_4 0x300 /* not available on OMAP2 */
#define GPMC_ECC_BCH_RESULT_5 0x304 /* not available on OMAP2 */
#define GPMC_ECC_BCH_RESULT_6 0x308 /* not available on OMAP2 */
/* GPMC ECC control settings */
#define GPMC_ECC_CTRL_ECCCLEAR 0x100
......@@ -677,6 +680,12 @@ void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
GPMC_BCH_SIZE * i;
reg->gpmc_bch_result3[i] = gpmc_base + GPMC_ECC_BCH_RESULT_3 +
GPMC_BCH_SIZE * i;
reg->gpmc_bch_result4[i] = gpmc_base + GPMC_ECC_BCH_RESULT_4 +
i * GPMC_BCH_SIZE;
reg->gpmc_bch_result5[i] = gpmc_base + GPMC_ECC_BCH_RESULT_5 +
i * GPMC_BCH_SIZE;
reg->gpmc_bch_result6[i] = gpmc_base + GPMC_ECC_BCH_RESULT_6 +
i * GPMC_BCH_SIZE;
}
}
......@@ -1412,6 +1421,12 @@ static int gpmc_probe_nand_child(struct platform_device *pdev,
else
gpmc_nand_data->ecc_opt =
OMAP_ECC_BCH8_CODE_HW_DETECTION_SW;
else if (!strcmp(s, "bch16"))
if (gpmc_nand_data->elm_of_node)
gpmc_nand_data->ecc_opt =
OMAP_ECC_BCH16_CODE_HW;
else
pr_err("%s: BCH16 requires ELM support\n", __func__);
else
pr_err("%s: ti,nand-ecc-opt invalid value\n", __func__);
......
......@@ -321,6 +321,8 @@ source "drivers/mtd/onenand/Kconfig"
source "drivers/mtd/lpddr/Kconfig"
source "drivers/mtd/spi-nor/Kconfig"
source "drivers/mtd/ubi/Kconfig"
endif # MTD
......@@ -32,4 +32,5 @@ inftl-objs := inftlcore.o inftlmount.o
obj-y += chips/ lpddr/ maps/ devices/ nand/ onenand/ tests/
obj-$(CONFIG_MTD_SPI_NOR) += spi-nor/
obj-$(CONFIG_MTD_UBI) += ubi/
......@@ -169,33 +169,33 @@ config MTD_OTP
in the programming of OTP bits will waste them.
config MTD_CFI_INTELEXT
tristate "Support for Intel/Sharp flash chips"
tristate "Support for CFI command set 0001 (Intel/Sharp chips)"
depends on MTD_GEN_PROBE
select MTD_CFI_UTIL
help
The Common Flash Interface defines a number of different command
sets which a CFI-compliant chip may claim to implement. This code
provides support for one of those command sets, used on Intel
StrataFlash and other parts.
provides support for command set 0001, used on Intel StrataFlash
and other parts.
config MTD_CFI_AMDSTD
tristate "Support for AMD/Fujitsu/Spansion flash chips"
tristate "Support for CFI command set 0002 (AMD/Fujitsu/Spansion chips)"
depends on MTD_GEN_PROBE
select MTD_CFI_UTIL
help
The Common Flash Interface defines a number of different command
sets which a CFI-compliant chip may claim to implement. This code
provides support for one of those command sets, used on chips
including the AMD Am29LV320.
provides support for command set 0002, used on chips including
the AMD Am29LV320.
config MTD_CFI_STAA
tristate "Support for ST (Advanced Architecture) flash chips"
tristate "Support for CFI command set 0020 (ST (Advanced Architecture) chips)"
depends on MTD_GEN_PROBE
select MTD_CFI_UTIL
help
The Common Flash Interface defines a number of different command
sets which a CFI-compliant chip may claim to implement. This code
provides support for one of those command sets.
provides support for command set 0020.
config MTD_CFI_UTIL
tristate
......
......@@ -80,7 +80,7 @@ config MTD_DATAFLASH_OTP
config MTD_M25P80
tristate "Support most SPI Flash chips (AT26DF, M25P, W25X, ...)"
depends on SPI_MASTER
depends on SPI_MASTER && MTD_SPI_NOR
help
This enables access to most modern SPI flash chips, used for
program and data storage. Series supported include Atmel AT26DF,
......@@ -212,7 +212,7 @@ config MTD_DOCG3
config MTD_ST_SPI_FSM
tristate "ST Microelectronics SPI FSM Serial Flash Controller"
depends on ARM || SH
depends on ARCH_STI
help
This provides an MTD device driver for the ST Microelectronics
SPI Fast Sequence Mode (FSM) Serial Flash Controller and support
......
......@@ -213,6 +213,28 @@ static void elm_load_syndrome(struct elm_info *info,
val = cpu_to_be32(*(u32 *) &ecc[0]) >> 12;
elm_write_reg(info, offset, val);
break;
case BCH16_ECC:
val = cpu_to_be32(*(u32 *) &ecc[22]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[18]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[14]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[10]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[6]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[2]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[0]) >> 16;
elm_write_reg(info, offset, val);
break;
default:
pr_err("invalid config bch_type\n");
}
......@@ -418,6 +440,7 @@ static int elm_remove(struct platform_device *pdev)
return 0;
}
#ifdef CONFIG_PM_SLEEP
/**
* elm_context_save
* saves ELM configurations to preserve them across Hardware powered-down
......@@ -435,6 +458,13 @@ static int elm_context_save(struct elm_info *info)
for (i = 0; i < ERROR_VECTOR_MAX; i++) {
offset = i * SYNDROME_FRAGMENT_REG_SIZE;
switch (bch_type) {
case BCH16_ECC:
regs->elm_syndrome_fragment_6[i] = elm_read_reg(info,
ELM_SYNDROME_FRAGMENT_6 + offset);
regs->elm_syndrome_fragment_5[i] = elm_read_reg(info,
ELM_SYNDROME_FRAGMENT_5 + offset);
regs->elm_syndrome_fragment_4[i] = elm_read_reg(info,
ELM_SYNDROME_FRAGMENT_4 + offset);
case BCH8_ECC:
regs->elm_syndrome_fragment_3[i] = elm_read_reg(info,
ELM_SYNDROME_FRAGMENT_3 + offset);
......@@ -473,6 +503,13 @@ static int elm_context_restore(struct elm_info *info)
for (i = 0; i < ERROR_VECTOR_MAX; i++) {
offset = i * SYNDROME_FRAGMENT_REG_SIZE;
switch (bch_type) {
case BCH16_ECC:
elm_write_reg(info, ELM_SYNDROME_FRAGMENT_6 + offset,
regs->elm_syndrome_fragment_6[i]);
elm_write_reg(info, ELM_SYNDROME_FRAGMENT_5 + offset,
regs->elm_syndrome_fragment_5[i]);
elm_write_reg(info, ELM_SYNDROME_FRAGMENT_4 + offset,
regs->elm_syndrome_fragment_4[i]);
case BCH8_ECC:
elm_write_reg(info, ELM_SYNDROME_FRAGMENT_3 + offset,
regs->elm_syndrome_fragment_3[i]);
......@@ -509,6 +546,7 @@ static int elm_resume(struct device *dev)
elm_context_restore(info);
return 0;
}
#endif
static SIMPLE_DEV_PM_OPS(elm_pm_ops, elm_suspend, elm_resume);
......
......@@ -19,485 +19,98 @@
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/mutex.h>
#include <linux/math64.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/mod_devicetable.h>
#include <linux/mtd/cfi.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/of_platform.h>
#include <linux/spi/spi.h>
#include <linux/spi/flash.h>
#include <linux/mtd/spi-nor.h>
/* Flash opcodes. */
#define OPCODE_WREN 0x06 /* Write enable */
#define OPCODE_RDSR 0x05 /* Read status register */
#define OPCODE_WRSR 0x01 /* Write status register 1 byte */
#define OPCODE_NORM_READ 0x03 /* Read data bytes (low frequency) */
#define OPCODE_FAST_READ 0x0b /* Read data bytes (high frequency) */
#define OPCODE_DUAL_READ 0x3b /* Read data bytes (Dual SPI) */
#define OPCODE_QUAD_READ 0x6b /* Read data bytes (Quad SPI) */
#define OPCODE_PP 0x02 /* Page program (up to 256 bytes) */
#define OPCODE_BE_4K 0x20 /* Erase 4KiB block */
#define OPCODE_BE_4K_PMC 0xd7 /* Erase 4KiB block on PMC chips */
#define OPCODE_BE_32K 0x52 /* Erase 32KiB block */
#define OPCODE_CHIP_ERASE 0xc7 /* Erase whole flash chip */
#define OPCODE_SE 0xd8 /* Sector erase (usually 64KiB) */
#define OPCODE_RDID 0x9f /* Read JEDEC ID */
#define OPCODE_RDCR 0x35 /* Read configuration register */
/* 4-byte address opcodes - used on Spansion and some Macronix flashes. */
#define OPCODE_NORM_READ_4B 0x13 /* Read data bytes (low frequency) */
#define OPCODE_FAST_READ_4B 0x0c /* Read data bytes (high frequency) */
#define OPCODE_DUAL_READ_4B 0x3c /* Read data bytes (Dual SPI) */
#define OPCODE_QUAD_READ_4B 0x6c /* Read data bytes (Quad SPI) */
#define OPCODE_PP_4B 0x12 /* Page program (up to 256 bytes) */
#define OPCODE_SE_4B 0xdc /* Sector erase (usually 64KiB) */
/* Used for SST flashes only. */
#define OPCODE_BP 0x02 /* Byte program */
#define OPCODE_WRDI 0x04 /* Write disable */
#define OPCODE_AAI_WP 0xad /* Auto address increment word program */
/* Used for Macronix and Winbond flashes. */
#define OPCODE_EN4B 0xb7 /* Enter 4-byte mode */
#define OPCODE_EX4B 0xe9 /* Exit 4-byte mode */
/* Used for Spansion flashes only. */
#define OPCODE_BRWR 0x17 /* Bank register write */
/* Status Register bits. */
#define SR_WIP 1 /* Write in progress */
#define SR_WEL 2 /* Write enable latch */
/* meaning of other SR_* bits may differ between vendors */
#define SR_BP0 4 /* Block protect 0 */
#define SR_BP1 8 /* Block protect 1 */
#define SR_BP2 0x10 /* Block protect 2 */
#define SR_SRWD 0x80 /* SR write protect */
#define SR_QUAD_EN_MX 0x40 /* Macronix Quad I/O */
/* Configuration Register bits. */
#define CR_QUAD_EN_SPAN 0x2 /* Spansion Quad I/O */
/* Define max times to check status register before we give up. */
#define MAX_READY_WAIT_JIFFIES (40 * HZ) /* M25P16 specs 40s max chip erase */
#define MAX_CMD_SIZE 6
#define JEDEC_MFR(_jedec_id) ((_jedec_id) >> 16)
/****************************************************************************/
enum read_type {
M25P80_NORMAL = 0,
M25P80_FAST,
M25P80_DUAL,
M25P80_QUAD,
};
struct m25p {
struct spi_device *spi;
struct mutex lock;
struct spi_nor spi_nor;
struct mtd_info mtd;
u16 page_size;
u16 addr_width;
u8 erase_opcode;
u8 read_opcode;
u8 program_opcode;
u8 *command;
enum read_type flash_read;
u8 command[MAX_CMD_SIZE];
};
static inline struct m25p *mtd_to_m25p(struct mtd_info *mtd)
{
return container_of(mtd, struct m25p, mtd);
}
/****************************************************************************/
/*
* Internal helper functions
*/
/*
* Read the status register, returning its value in the location
* Return the status register value.
* Returns negative if error occurred.
*/
static int read_sr(struct m25p *flash)
{
ssize_t retval;
u8 code = OPCODE_RDSR;
u8 val;
retval = spi_write_then_read(flash->spi, &code, 1, &val, 1);
if (retval < 0) {
dev_err(&flash->spi->dev, "error %d reading SR\n",
(int) retval);
return retval;
}
return val;
}
/*
* Read configuration register, returning its value in the
* location. Return the configuration register value.
* Returns negative if error occured.
*/
static int read_cr(struct m25p *flash)
{
u8 code = OPCODE_RDCR;
int ret;
u8 val;
ret = spi_write_then_read(flash->spi, &code, 1, &val, 1);
if (ret < 0) {
dev_err(&flash->spi->dev, "error %d reading CR\n", ret);
return ret;
}
return val;
}
/*
* Write status register 1 byte
* Returns negative if error occurred.
*/
static int write_sr(struct m25p *flash, u8 val)
{
flash->command[0] = OPCODE_WRSR;
flash->command[1] = val;
return spi_write(flash->spi, flash->command, 2);
}
/*
* Set write enable latch with Write Enable command.
* Returns negative if error occurred.
*/
static inline int write_enable(struct m25p *flash)
{
u8 code = OPCODE_WREN;
return spi_write_then_read(flash->spi, &code, 1, NULL, 0);
}
/*
* Send write disble instruction to the chip.
*/
static inline int write_disable(struct m25p *flash)
{
u8 code = OPCODE_WRDI;
return spi_write_then_read(flash->spi, &code, 1, NULL, 0);
}
/*
* Enable/disable 4-byte addressing mode.
*/
static inline int set_4byte(struct m25p *flash, u32 jedec_id, int enable)
{
int status;
bool need_wren = false;
switch (JEDEC_MFR(jedec_id)) {
case CFI_MFR_ST: /* Micron, actually */
/* Some Micron need WREN command; all will accept it */
need_wren = true;
case CFI_MFR_MACRONIX:
case 0xEF /* winbond */:
if (need_wren)
write_enable(flash);
flash->command[0] = enable ? OPCODE_EN4B : OPCODE_EX4B;
status = spi_write(flash->spi, flash->command, 1);
if (need_wren)
write_disable(flash);
return status;
default:
/* Spansion style */
flash->command[0] = OPCODE_BRWR;
flash->command[1] = enable << 7;
return spi_write(flash->spi, flash->command, 2);
}
}
/*
* Service routine to read status register until ready, or timeout occurs.
* Returns non-zero if error.
*/
static int wait_till_ready(struct m25p *flash)
{
unsigned long deadline;
int sr;
deadline = jiffies + MAX_READY_WAIT_JIFFIES;
do {
if ((sr = read_sr(flash)) < 0)
break;
else if (!(sr & SR_WIP))
return 0;
cond_resched();
} while (!time_after_eq(jiffies, deadline));
return 1;
}
/*
* Write status Register and configuration register with 2 bytes
* The first byte will be written to the status register, while the
* second byte will be written to the configuration register.
* Return negative if error occured.
*/
static int write_sr_cr(struct m25p *flash, u16 val)
{
flash->command[0] = OPCODE_WRSR;
flash->command[1] = val & 0xff;
flash->command[2] = (val >> 8);
return spi_write(flash->spi, flash->command, 3);
}
static int macronix_quad_enable(struct m25p *flash)
{
int ret, val;
u8 cmd[2];
cmd[0] = OPCODE_WRSR;
val = read_sr(flash);
cmd[1] = val | SR_QUAD_EN_MX;
write_enable(flash);
spi_write(flash->spi, &cmd, 2);
if (wait_till_ready(flash))
return 1;
ret = read_sr(flash);
if (!(ret > 0 && (ret & SR_QUAD_EN_MX))) {
dev_err(&flash->spi->dev, "Macronix Quad bit not set\n");
return -EINVAL;
}
return 0;
}
static int spansion_quad_enable(struct m25p *flash)
static int m25p80_read_reg(struct spi_nor *nor, u8 code, u8 *val, int len)
{
struct m25p *flash = nor->priv;
struct spi_device *spi = flash->spi;
int ret;
int quad_en = CR_QUAD_EN_SPAN << 8;
write_enable(flash);
ret = spi_write_then_read(spi, &code, 1, val, len);
if (ret < 0)
dev_err(&spi->dev, "error %d reading %x\n", ret, code);
ret = write_sr_cr(flash, quad_en);
if (ret < 0) {
dev_err(&flash->spi->dev,
"error while writing configuration register\n");
return -EINVAL;
}
/* read back and check it */
ret = read_cr(flash);
if (!(ret > 0 && (ret & CR_QUAD_EN_SPAN))) {
dev_err(&flash->spi->dev, "Spansion Quad bit not set\n");
return -EINVAL;
}
return 0;
}
static int set_quad_mode(struct m25p *flash, u32 jedec_id)
{
int status;
switch (JEDEC_MFR(jedec_id)) {
case CFI_MFR_MACRONIX:
status = macronix_quad_enable(flash);
if (status) {
dev_err(&flash->spi->dev,
"Macronix quad-read not enabled\n");
return -EINVAL;
}
return status;
default:
status = spansion_quad_enable(flash);
if (status) {
dev_err(&flash->spi->dev,
"Spansion quad-read not enabled\n");
return -EINVAL;
}
return status;
}
}
/*
* Erase the whole flash memory
*
* Returns 0 if successful, non-zero otherwise.
*/
static int erase_chip(struct m25p *flash)
{
pr_debug("%s: %s %lldKiB\n", dev_name(&flash->spi->dev), __func__,
(long long)(flash->mtd.size >> 10));
/* Wait until finished previous write command. */
if (wait_till_ready(flash))
return 1;
/* Send write enable, then erase commands. */
write_enable(flash);
/* Set up command buffer. */
flash->command[0] = OPCODE_CHIP_ERASE;
spi_write(flash->spi, flash->command, 1);
return 0;
return ret;
}
static void m25p_addr2cmd(struct m25p *flash, unsigned int addr, u8 *cmd)
static void m25p_addr2cmd(struct spi_nor *nor, unsigned int addr, u8 *cmd)
{
/* opcode is in cmd[0] */
cmd[1] = addr >> (flash->addr_width * 8 - 8);
cmd[2] = addr >> (flash->addr_width * 8 - 16);
cmd[3] = addr >> (flash->addr_width * 8 - 24);
cmd[4] = addr >> (flash->addr_width * 8 - 32);
cmd[1] = addr >> (nor->addr_width * 8 - 8);
cmd[2] = addr >> (nor->addr_width * 8 - 16);
cmd[3] = addr >> (nor->addr_width * 8 - 24);
cmd[4] = addr >> (nor->addr_width * 8 - 32);
}
static int m25p_cmdsz(struct m25p *flash)
static int m25p_cmdsz(struct spi_nor *nor)
{
return 1 + flash->addr_width;
return 1 + nor->addr_width;
}
/*
* Erase one sector of flash memory at offset ``offset'' which is any
* address within the sector which should be erased.
*
* Returns 0 if successful, non-zero otherwise.
*/
static int erase_sector(struct m25p *flash, u32 offset)
static int m25p80_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len,
int wr_en)
{
pr_debug("%s: %s %dKiB at 0x%08x\n", dev_name(&flash->spi->dev),
__func__, flash->mtd.erasesize / 1024, offset);
/* Wait until finished previous write command. */
if (wait_till_ready(flash))
return 1;
/* Send write enable, then erase commands. */
write_enable(flash);
struct m25p *flash = nor->priv;
struct spi_device *spi = flash->spi;
/* Set up command buffer. */
flash->command[0] = flash->erase_opcode;
m25p_addr2cmd(flash, offset, flash->command);
spi_write(flash->spi, flash->command, m25p_cmdsz(flash));
flash->command[0] = opcode;
if (buf)
memcpy(&flash->command[1], buf, len);
return 0;
return spi_write(spi, flash->command, len + 1);
}
/****************************************************************************/
/*
* MTD implementation
*/
/*
* Erase an address range on the flash chip. The address range may extend
* one or more erase sectors. Return an error is there is a problem erasing.
*/
static int m25p80_erase(struct mtd_info *mtd, struct erase_info *instr)
static void m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct m25p *flash = mtd_to_m25p(mtd);
u32 addr,len;
uint32_t rem;
pr_debug("%s: %s at 0x%llx, len %lld\n", dev_name(&flash->spi->dev),
__func__, (long long)instr->addr,
(long long)instr->len);
div_u64_rem(instr->len, mtd->erasesize, &rem);
if (rem)
return -EINVAL;
addr = instr->addr;
len = instr->len;
mutex_lock(&flash->lock);
/* whole-chip erase? */
if (len == flash->mtd.size) {
if (erase_chip(flash)) {
instr->state = MTD_ERASE_FAILED;
mutex_unlock(&flash->lock);
return -EIO;
}
struct m25p *flash = nor->priv;
struct spi_device *spi = flash->spi;
struct spi_transfer t[2] = {};
struct spi_message m;
int cmd_sz = m25p_cmdsz(nor);
/* REVISIT in some cases we could speed up erasing large regions
* by using OPCODE_SE instead of OPCODE_BE_4K. We may have set up
* to use "small sector erase", but that's not always optimal.
*/
spi_message_init(&m);
/* "sector"-at-a-time erase */
} else {
while (len) {
if (erase_sector(flash, addr)) {
instr->state = MTD_ERASE_FAILED;
mutex_unlock(&flash->lock);
return -EIO;
}
if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second)
cmd_sz = 1;
addr += mtd->erasesize;
len -= mtd->erasesize;
}
}
flash->command[0] = nor->program_opcode;
m25p_addr2cmd(nor, to, flash->command);
mutex_unlock(&flash->lock);
t[0].tx_buf = flash->command;
t[0].len = cmd_sz;
spi_message_add_tail(&t[0], &m);
instr->state = MTD_ERASE_DONE;
mtd_erase_callback(instr);
t[1].tx_buf = buf;
t[1].len = len;
spi_message_add_tail(&t[1], &m);
return 0;
}
spi_sync(spi, &m);
/*
* Dummy Cycle calculation for different type of read.
* It can be used to support more commands with
* different dummy cycle requirements.
*/
static inline int m25p80_dummy_cycles_read(struct m25p *flash)
{
switch (flash->flash_read) {
case M25P80_FAST:
case M25P80_DUAL:
case M25P80_QUAD:
return 1;
case M25P80_NORMAL:
return 0;
default:
dev_err(&flash->spi->dev, "No valid read type supported\n");
return -1;
}
*retlen += m.actual_length - cmd_sz;
}
static inline unsigned int m25p80_rx_nbits(const struct m25p *flash)
static inline unsigned int m25p80_rx_nbits(struct spi_nor *nor)
{
switch (flash->flash_read) {
case M25P80_DUAL:
switch (nor->flash_read) {
case SPI_NOR_DUAL:
return 2;
case M25P80_QUAD:
case SPI_NOR_QUAD:
return 4;
default:
return 0;
......@@ -505,590 +118,72 @@ static inline unsigned int m25p80_rx_nbits(const struct m25p *flash)
}
/*
* Read an address range from the flash chip. The address range
* Read an address range from the nor chip. The address range
* may be any size provided it is within the physical boundaries.
*/
static int m25p80_read(struct mtd_info *mtd, loff_t from, size_t len,
static int m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
size_t *retlen, u_char *buf)
{
struct m25p *flash = mtd_to_m25p(mtd);
struct m25p *flash = nor->priv;
struct spi_device *spi = flash->spi;
struct spi_transfer t[2];
struct spi_message m;
uint8_t opcode;
int dummy;
int dummy = nor->read_dummy;
int ret;
pr_debug("%s: %s from 0x%08x, len %zd\n", dev_name(&flash->spi->dev),
__func__, (u32)from, len);
/* Wait till previous write/erase is done. */
ret = nor->wait_till_ready(nor);
if (ret)
return ret;
spi_message_init(&m);
memset(t, 0, (sizeof t));
dummy = m25p80_dummy_cycles_read(flash);
if (dummy < 0) {
dev_err(&flash->spi->dev, "No valid read command supported\n");
return -EINVAL;
}
flash->command[0] = nor->read_opcode;
m25p_addr2cmd(nor, from, flash->command);
t[0].tx_buf = flash->command;
t[0].len = m25p_cmdsz(flash) + dummy;
t[0].len = m25p_cmdsz(nor) + dummy;
spi_message_add_tail(&t[0], &m);
t[1].rx_buf = buf;
t[1].rx_nbits = m25p80_rx_nbits(flash);
t[1].rx_nbits = m25p80_rx_nbits(nor);
t[1].len = len;
spi_message_add_tail(&t[1], &m);
mutex_lock(&flash->lock);
/* Wait till previous write/erase is done. */
if (wait_till_ready(flash)) {
/* REVISIT status return?? */
mutex_unlock(&flash->lock);
return 1;
}
/* Set up the write data buffer. */
opcode = flash->read_opcode;
flash->command[0] = opcode;
m25p_addr2cmd(flash, from, flash->command);
spi_sync(flash->spi, &m);
*retlen = m.actual_length - m25p_cmdsz(flash) - dummy;
mutex_unlock(&flash->lock);
return 0;
}
/*
* Write an address range to the flash chip. Data must be written in
* FLASH_PAGESIZE chunks. The address range may be any size provided
* it is within the physical boundaries.
*/
static int m25p80_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct m25p *flash = mtd_to_m25p(mtd);
u32 page_offset, page_size;
struct spi_transfer t[2];
struct spi_message m;
pr_debug("%s: %s to 0x%08x, len %zd\n", dev_name(&flash->spi->dev),
__func__, (u32)to, len);
spi_message_init(&m);
memset(t, 0, (sizeof t));
t[0].tx_buf = flash->command;
t[0].len = m25p_cmdsz(flash);
spi_message_add_tail(&t[0], &m);
t[1].tx_buf = buf;
spi_message_add_tail(&t[1], &m);
mutex_lock(&flash->lock);
/* Wait until finished previous write command. */
if (wait_till_ready(flash)) {
mutex_unlock(&flash->lock);
return 1;
}
write_enable(flash);
/* Set up the opcode in the write buffer. */
flash->command[0] = flash->program_opcode;
m25p_addr2cmd(flash, to, flash->command);
page_offset = to & (flash->page_size - 1);
/* do all the bytes fit onto one page? */
if (page_offset + len <= flash->page_size) {
t[1].len = len;
spi_sync(flash->spi, &m);
*retlen = m.actual_length - m25p_cmdsz(flash);
} else {
u32 i;
/* the size of data remaining on the first page */
page_size = flash->page_size - page_offset;
t[1].len = page_size;
spi_sync(flash->spi, &m);
*retlen = m.actual_length - m25p_cmdsz(flash);
/* write everything in flash->page_size chunks */
for (i = page_size; i < len; i += page_size) {
page_size = len - i;
if (page_size > flash->page_size)
page_size = flash->page_size;
/* write the next page to flash */
m25p_addr2cmd(flash, to + i, flash->command);
t[1].tx_buf = buf + i;
t[1].len = page_size;
wait_till_ready(flash);
write_enable(flash);
spi_sync(flash->spi, &m);
*retlen += m.actual_length - m25p_cmdsz(flash);
}
}
mutex_unlock(&flash->lock);
spi_sync(spi, &m);
*retlen = m.actual_length - m25p_cmdsz(nor) - dummy;
return 0;
}
static int sst_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
static int m25p80_erase(struct spi_nor *nor, loff_t offset)
{
struct m25p *flash = mtd_to_m25p(mtd);
struct spi_transfer t[2];
struct spi_message m;
size_t actual;
int cmd_sz, ret;
pr_debug("%s: %s to 0x%08x, len %zd\n", dev_name(&flash->spi->dev),
__func__, (u32)to, len);
spi_message_init(&m);
memset(t, 0, (sizeof t));
t[0].tx_buf = flash->command;
t[0].len = m25p_cmdsz(flash);
spi_message_add_tail(&t[0], &m);
t[1].tx_buf = buf;
spi_message_add_tail(&t[1], &m);
struct m25p *flash = nor->priv;
int ret;
mutex_lock(&flash->lock);
dev_dbg(nor->dev, "%dKiB at 0x%08x\n",
flash->mtd.erasesize / 1024, (u32)offset);
/* Wait until finished previous write command. */
ret = wait_till_ready(flash);
if (ret)
goto time_out;
write_enable(flash);
actual = to % 2;
/* Start write from odd address. */
if (actual) {
flash->command[0] = OPCODE_BP;
m25p_addr2cmd(flash, to, flash->command);
/* write one byte. */
t[1].len = 1;
spi_sync(flash->spi, &m);
ret = wait_till_ready(flash);
if (ret)
goto time_out;
*retlen += m.actual_length - m25p_cmdsz(flash);
}
to += actual;
flash->command[0] = OPCODE_AAI_WP;
m25p_addr2cmd(flash, to, flash->command);
/* Write out most of the data here. */
cmd_sz = m25p_cmdsz(flash);
for (; actual < len - 1; actual += 2) {
t[0].len = cmd_sz;
/* write two bytes. */
t[1].len = 2;
t[1].tx_buf = buf + actual;
spi_sync(flash->spi, &m);
ret = wait_till_ready(flash);
if (ret)
goto time_out;
*retlen += m.actual_length - cmd_sz;
cmd_sz = 1;
to += 2;
}
write_disable(flash);
ret = wait_till_ready(flash);
ret = nor->wait_till_ready(nor);
if (ret)
goto time_out;
/* Write out trailing byte if it exists. */
if (actual != len) {
write_enable(flash);
flash->command[0] = OPCODE_BP;
m25p_addr2cmd(flash, to, flash->command);
t[0].len = m25p_cmdsz(flash);
t[1].len = 1;
t[1].tx_buf = buf + actual;
return ret;
spi_sync(flash->spi, &m);
ret = wait_till_ready(flash);
/* Send write enable, then erase commands. */
ret = nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0, 0);
if (ret)
goto time_out;
*retlen += m.actual_length - m25p_cmdsz(flash);
write_disable(flash);
}
time_out:
mutex_unlock(&flash->lock);
return ret;
}
static int m25p80_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct m25p *flash = mtd_to_m25p(mtd);
uint32_t offset = ofs;
uint8_t status_old, status_new;
int res = 0;
mutex_lock(&flash->lock);
/* Wait until finished previous command */
if (wait_till_ready(flash)) {
res = 1;
goto err;
}
status_old = read_sr(flash);
if (offset < flash->mtd.size-(flash->mtd.size/2))
status_new = status_old | SR_BP2 | SR_BP1 | SR_BP0;
else if (offset < flash->mtd.size-(flash->mtd.size/4))
status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;
else if (offset < flash->mtd.size-(flash->mtd.size/8))
status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
else if (offset < flash->mtd.size-(flash->mtd.size/16))
status_new = (status_old & ~(SR_BP0|SR_BP1)) | SR_BP2;
else if (offset < flash->mtd.size-(flash->mtd.size/32))
status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
else if (offset < flash->mtd.size-(flash->mtd.size/64))
status_new = (status_old & ~(SR_BP2|SR_BP0)) | SR_BP1;
else
status_new = (status_old & ~(SR_BP2|SR_BP1)) | SR_BP0;
/* Only modify protection if it will not unlock other areas */
if ((status_new&(SR_BP2|SR_BP1|SR_BP0)) >
(status_old&(SR_BP2|SR_BP1|SR_BP0))) {
write_enable(flash);
if (write_sr(flash, status_new) < 0) {
res = 1;
goto err;
}
}
err: mutex_unlock(&flash->lock);
return res;
}
static int m25p80_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct m25p *flash = mtd_to_m25p(mtd);
uint32_t offset = ofs;
uint8_t status_old, status_new;
int res = 0;
mutex_lock(&flash->lock);
/* Wait until finished previous command */
if (wait_till_ready(flash)) {
res = 1;
goto err;
}
status_old = read_sr(flash);
if (offset+len > flash->mtd.size-(flash->mtd.size/64))
status_new = status_old & ~(SR_BP2|SR_BP1|SR_BP0);
else if (offset+len > flash->mtd.size-(flash->mtd.size/32))
status_new = (status_old & ~(SR_BP2|SR_BP1)) | SR_BP0;
else if (offset+len > flash->mtd.size-(flash->mtd.size/16))
status_new = (status_old & ~(SR_BP2|SR_BP0)) | SR_BP1;
else if (offset+len > flash->mtd.size-(flash->mtd.size/8))
status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
else if (offset+len > flash->mtd.size-(flash->mtd.size/4))
status_new = (status_old & ~(SR_BP0|SR_BP1)) | SR_BP2;
else if (offset+len > flash->mtd.size-(flash->mtd.size/2))
status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
else
status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;
/* Only modify protection if it will not lock other areas */
if ((status_new&(SR_BP2|SR_BP1|SR_BP0)) <
(status_old&(SR_BP2|SR_BP1|SR_BP0))) {
write_enable(flash);
if (write_sr(flash, status_new) < 0) {
res = 1;
goto err;
}
}
err: mutex_unlock(&flash->lock);
return res;
}
/****************************************************************************/
/*
* SPI device driver setup and teardown
*/
struct flash_info {
/* JEDEC id zero means "no ID" (most older chips); otherwise it has
* a high byte of zero plus three data bytes: the manufacturer id,
* then a two byte device id.
*/
u32 jedec_id;
u16 ext_id;
/* The size listed here is what works with OPCODE_SE, which isn't
* necessarily called a "sector" by the vendor.
*/
unsigned sector_size;
u16 n_sectors;
u16 page_size;
u16 addr_width;
u16 flags;
#define SECT_4K 0x01 /* OPCODE_BE_4K works uniformly */
#define M25P_NO_ERASE 0x02 /* No erase command needed */
#define SST_WRITE 0x04 /* use SST byte programming */
#define M25P_NO_FR 0x08 /* Can't do fastread */
#define SECT_4K_PMC 0x10 /* OPCODE_BE_4K_PMC works uniformly */
#define M25P80_DUAL_READ 0x20 /* Flash supports Dual Read */
#define M25P80_QUAD_READ 0x40 /* Flash supports Quad Read */
};
#define INFO(_jedec_id, _ext_id, _sector_size, _n_sectors, _flags) \
((kernel_ulong_t)&(struct flash_info) { \
.jedec_id = (_jedec_id), \
.ext_id = (_ext_id), \
.sector_size = (_sector_size), \
.n_sectors = (_n_sectors), \
.page_size = 256, \
.flags = (_flags), \
})
#define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_width, _flags) \
((kernel_ulong_t)&(struct flash_info) { \
.sector_size = (_sector_size), \
.n_sectors = (_n_sectors), \
.page_size = (_page_size), \
.addr_width = (_addr_width), \
.flags = (_flags), \
})
/* NOTE: double check command sets and memory organization when you add
* more flash chips. This current list focusses on newer chips, which
* have been converging on command sets which including JEDEC ID.
*/
static const struct spi_device_id m25p_ids[] = {
/* Atmel -- some are (confusingly) marketed as "DataFlash" */
{ "at25fs010", INFO(0x1f6601, 0, 32 * 1024, 4, SECT_4K) },
{ "at25fs040", INFO(0x1f6604, 0, 64 * 1024, 8, SECT_4K) },
{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024, 8, SECT_4K) },
{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024, 64, SECT_4K) },
{ "at25df641", INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) },
{ "at26f004", INFO(0x1f0400, 0, 64 * 1024, 8, SECT_4K) },
{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K) },
{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K) },
{ "at26df321", INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) },
{ "at45db081d", INFO(0x1f2500, 0, 64 * 1024, 16, SECT_4K) },
/* EON -- en25xxx */
{ "en25f32", INFO(0x1c3116, 0, 64 * 1024, 64, SECT_4K) },
{ "en25p32", INFO(0x1c2016, 0, 64 * 1024, 64, 0) },
{ "en25q32b", INFO(0x1c3016, 0, 64 * 1024, 64, 0) },
{ "en25p64", INFO(0x1c2017, 0, 64 * 1024, 128, 0) },
{ "en25q64", INFO(0x1c3017, 0, 64 * 1024, 128, SECT_4K) },
{ "en25qh256", INFO(0x1c7019, 0, 64 * 1024, 512, 0) },
/* ESMT */
{ "f25l32pa", INFO(0x8c2016, 0, 64 * 1024, 64, SECT_4K) },
/* Everspin */
{ "mr25h256", CAT25_INFO( 32 * 1024, 1, 256, 2, M25P_NO_ERASE | M25P_NO_FR) },
{ "mr25h10", CAT25_INFO(128 * 1024, 1, 256, 3, M25P_NO_ERASE | M25P_NO_FR) },
/* GigaDevice */
{ "gd25q32", INFO(0xc84016, 0, 64 * 1024, 64, SECT_4K) },
{ "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) },
/* Intel/Numonyx -- xxxs33b */
{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) },
{ "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) },
{ "640s33b", INFO(0x898913, 0, 64 * 1024, 128, 0) },
/* Macronix */
{ "mx25l2005a", INFO(0xc22012, 0, 64 * 1024, 4, SECT_4K) },
{ "mx25l4005a", INFO(0xc22013, 0, 64 * 1024, 8, SECT_4K) },
{ "mx25l8005", INFO(0xc22014, 0, 64 * 1024, 16, 0) },
{ "mx25l1606e", INFO(0xc22015, 0, 64 * 1024, 32, SECT_4K) },
{ "mx25l3205d", INFO(0xc22016, 0, 64 * 1024, 64, 0) },
{ "mx25l3255e", INFO(0xc29e16, 0, 64 * 1024, 64, SECT_4K) },
{ "mx25l6405d", INFO(0xc22017, 0, 64 * 1024, 128, 0) },
{ "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, 0) },
{ "mx25l12855e", INFO(0xc22618, 0, 64 * 1024, 256, 0) },
{ "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, 0) },
{ "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
{ "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, M25P80_QUAD_READ) },
{ "mx66l1g55g", INFO(0xc2261b, 0, 64 * 1024, 2048, M25P80_QUAD_READ) },
/* Micron */
{ "n25q064", INFO(0x20ba17, 0, 64 * 1024, 128, 0) },
{ "n25q128a11", INFO(0x20bb18, 0, 64 * 1024, 256, 0) },
{ "n25q128a13", INFO(0x20ba18, 0, 64 * 1024, 256, 0) },
{ "n25q256a", INFO(0x20ba19, 0, 64 * 1024, 512, SECT_4K) },
{ "n25q512a", INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K) },
/* PMC */
{ "pm25lv512", INFO(0, 0, 32 * 1024, 2, SECT_4K_PMC) },
{ "pm25lv010", INFO(0, 0, 32 * 1024, 4, SECT_4K_PMC) },
{ "pm25lq032", INFO(0x7f9d46, 0, 64 * 1024, 64, SECT_4K) },
/* Spansion -- single (large) sector size only, at least
* for the chips listed here (without boot sectors).
*/
{ "s25sl032p", INFO(0x010215, 0x4d00, 64 * 1024, 64, 0) },
{ "s25sl064p", INFO(0x010216, 0x4d00, 64 * 1024, 128, 0) },
{ "s25fl256s0", INFO(0x010219, 0x4d00, 256 * 1024, 128, 0) },
{ "s25fl256s1", INFO(0x010219, 0x4d01, 64 * 1024, 512, M25P80_DUAL_READ | M25P80_QUAD_READ) },
{ "s25fl512s", INFO(0x010220, 0x4d00, 256 * 1024, 256, M25P80_DUAL_READ | M25P80_QUAD_READ) },
{ "s70fl01gs", INFO(0x010221, 0x4d00, 256 * 1024, 256, 0) },
{ "s25sl12800", INFO(0x012018, 0x0300, 256 * 1024, 64, 0) },
{ "s25sl12801", INFO(0x012018, 0x0301, 64 * 1024, 256, 0) },
{ "s25fl129p0", INFO(0x012018, 0x4d00, 256 * 1024, 64, 0) },
{ "s25fl129p1", INFO(0x012018, 0x4d01, 64 * 1024, 256, 0) },
{ "s25sl004a", INFO(0x010212, 0, 64 * 1024, 8, 0) },
{ "s25sl008a", INFO(0x010213, 0, 64 * 1024, 16, 0) },
{ "s25sl016a", INFO(0x010214, 0, 64 * 1024, 32, 0) },
{ "s25sl032a", INFO(0x010215, 0, 64 * 1024, 64, 0) },
{ "s25sl064a", INFO(0x010216, 0, 64 * 1024, 128, 0) },
{ "s25fl008k", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) },
{ "s25fl016k", INFO(0xef4015, 0, 64 * 1024, 32, SECT_4K) },
{ "s25fl064k", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) },
/* SST -- large erase sizes are "overlays", "sectors" are 4K */
{ "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) },
{ "sst25vf080b", INFO(0xbf258e, 0, 64 * 1024, 16, SECT_4K | SST_WRITE) },
{ "sst25vf016b", INFO(0xbf2541, 0, 64 * 1024, 32, SECT_4K | SST_WRITE) },
{ "sst25vf032b", INFO(0xbf254a, 0, 64 * 1024, 64, SECT_4K | SST_WRITE) },
{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128, SECT_4K) },
{ "sst25wf512", INFO(0xbf2501, 0, 64 * 1024, 1, SECT_4K | SST_WRITE) },
{ "sst25wf010", INFO(0xbf2502, 0, 64 * 1024, 2, SECT_4K | SST_WRITE) },
{ "sst25wf020", INFO(0xbf2503, 0, 64 * 1024, 4, SECT_4K | SST_WRITE) },
{ "sst25wf040", INFO(0xbf2504, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) },
/* ST Microelectronics -- newer production may have feature updates */
{ "m25p05", INFO(0x202010, 0, 32 * 1024, 2, 0) },
{ "m25p10", INFO(0x202011, 0, 32 * 1024, 4, 0) },
{ "m25p20", INFO(0x202012, 0, 64 * 1024, 4, 0) },
{ "m25p40", INFO(0x202013, 0, 64 * 1024, 8, 0) },
{ "m25p80", INFO(0x202014, 0, 64 * 1024, 16, 0) },
{ "m25p16", INFO(0x202015, 0, 64 * 1024, 32, 0) },
{ "m25p32", INFO(0x202016, 0, 64 * 1024, 64, 0) },
{ "m25p64", INFO(0x202017, 0, 64 * 1024, 128, 0) },
{ "m25p128", INFO(0x202018, 0, 256 * 1024, 64, 0) },
{ "n25q032", INFO(0x20ba16, 0, 64 * 1024, 64, 0) },
{ "m25p05-nonjedec", INFO(0, 0, 32 * 1024, 2, 0) },
{ "m25p10-nonjedec", INFO(0, 0, 32 * 1024, 4, 0) },
{ "m25p20-nonjedec", INFO(0, 0, 64 * 1024, 4, 0) },
{ "m25p40-nonjedec", INFO(0, 0, 64 * 1024, 8, 0) },
{ "m25p80-nonjedec", INFO(0, 0, 64 * 1024, 16, 0) },
{ "m25p16-nonjedec", INFO(0, 0, 64 * 1024, 32, 0) },
{ "m25p32-nonjedec", INFO(0, 0, 64 * 1024, 64, 0) },
{ "m25p64-nonjedec", INFO(0, 0, 64 * 1024, 128, 0) },
{ "m25p128-nonjedec", INFO(0, 0, 256 * 1024, 64, 0) },
{ "m45pe10", INFO(0x204011, 0, 64 * 1024, 2, 0) },
{ "m45pe80", INFO(0x204014, 0, 64 * 1024, 16, 0) },
{ "m45pe16", INFO(0x204015, 0, 64 * 1024, 32, 0) },
{ "m25pe20", INFO(0x208012, 0, 64 * 1024, 4, 0) },
{ "m25pe80", INFO(0x208014, 0, 64 * 1024, 16, 0) },
{ "m25pe16", INFO(0x208015, 0, 64 * 1024, 32, SECT_4K) },
{ "m25px16", INFO(0x207115, 0, 64 * 1024, 32, SECT_4K) },
{ "m25px32", INFO(0x207116, 0, 64 * 1024, 64, SECT_4K) },
{ "m25px32-s0", INFO(0x207316, 0, 64 * 1024, 64, SECT_4K) },
{ "m25px32-s1", INFO(0x206316, 0, 64 * 1024, 64, SECT_4K) },
{ "m25px64", INFO(0x207117, 0, 64 * 1024, 128, 0) },
/* Winbond -- w25x "blocks" are 64K, "sectors" are 4KiB */
{ "w25x10", INFO(0xef3011, 0, 64 * 1024, 2, SECT_4K) },
{ "w25x20", INFO(0xef3012, 0, 64 * 1024, 4, SECT_4K) },
{ "w25x40", INFO(0xef3013, 0, 64 * 1024, 8, SECT_4K) },
{ "w25x80", INFO(0xef3014, 0, 64 * 1024, 16, SECT_4K) },
{ "w25x16", INFO(0xef3015, 0, 64 * 1024, 32, SECT_4K) },
{ "w25x32", INFO(0xef3016, 0, 64 * 1024, 64, SECT_4K) },
{ "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) },
{ "w25q32dw", INFO(0xef6016, 0, 64 * 1024, 64, SECT_4K) },
{ "w25x64", INFO(0xef3017, 0, 64 * 1024, 128, SECT_4K) },
{ "w25q64", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) },
{ "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
{ "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) },
{ "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) },
{ "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
{ "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K) },
/* Catalyst / On Semiconductor -- non-JEDEC */
{ "cat25c11", CAT25_INFO( 16, 8, 16, 1, M25P_NO_ERASE | M25P_NO_FR) },
{ "cat25c03", CAT25_INFO( 32, 8, 16, 2, M25P_NO_ERASE | M25P_NO_FR) },
{ "cat25c09", CAT25_INFO( 128, 8, 32, 2, M25P_NO_ERASE | M25P_NO_FR) },
{ "cat25c17", CAT25_INFO( 256, 8, 32, 2, M25P_NO_ERASE | M25P_NO_FR) },
{ "cat25128", CAT25_INFO(2048, 8, 64, 2, M25P_NO_ERASE | M25P_NO_FR) },
{ },
};
MODULE_DEVICE_TABLE(spi, m25p_ids);
static const struct spi_device_id *jedec_probe(struct spi_device *spi)
{
int tmp;
u8 code = OPCODE_RDID;
u8 id[5];
u32 jedec;
u16 ext_jedec;
struct flash_info *info;
/* JEDEC also defines an optional "extended device information"
* string for after vendor-specific data, after the three bytes
* we use here. Supporting some chips might require using it.
*/
tmp = spi_write_then_read(spi, &code, 1, id, 5);
if (tmp < 0) {
pr_debug("%s: error %d reading JEDEC ID\n",
dev_name(&spi->dev), tmp);
return ERR_PTR(tmp);
}
jedec = id[0];
jedec = jedec << 8;
jedec |= id[1];
jedec = jedec << 8;
jedec |= id[2];
/* Set up command buffer. */
flash->command[0] = nor->erase_opcode;
m25p_addr2cmd(nor, offset, flash->command);
ext_jedec = id[3] << 8 | id[4];
spi_write(flash->spi, flash->command, m25p_cmdsz(nor));
for (tmp = 0; tmp < ARRAY_SIZE(m25p_ids) - 1; tmp++) {
info = (void *)m25p_ids[tmp].driver_data;
if (info->jedec_id == jedec) {
if (info->ext_id == 0 || info->ext_id == ext_jedec)
return &m25p_ids[tmp];
}
}
dev_err(&spi->dev, "unrecognized JEDEC id %06x\n", jedec);
return ERR_PTR(-ENODEV);
return 0;
}
/*
* board specific setup should have ensured the SPI clock used here
* matches what the READ command supports, at least until this driver
......@@ -1096,231 +191,45 @@ static const struct spi_device_id *jedec_probe(struct spi_device *spi)
*/
static int m25p_probe(struct spi_device *spi)
{
const struct spi_device_id *id = spi_get_device_id(spi);
struct mtd_part_parser_data ppdata;
struct flash_platform_data *data;
struct m25p *flash;
struct flash_info *info;
unsigned i;
struct mtd_part_parser_data ppdata;
struct device_node *np = spi->dev.of_node;
struct spi_nor *nor;
enum read_mode mode = SPI_NOR_NORMAL;
int ret;
/* Platform data helps sort out which chip type we have, as
* well as how this board partitions it. If we don't have
* a chip ID, try the JEDEC id commands; they'll work for most
* newer chips, even if we don't recognize the particular chip.
*/
data = dev_get_platdata(&spi->dev);
if (data && data->type) {
const struct spi_device_id *plat_id;
for (i = 0; i < ARRAY_SIZE(m25p_ids) - 1; i++) {
plat_id = &m25p_ids[i];
if (strcmp(data->type, plat_id->name))
continue;
break;
}
if (i < ARRAY_SIZE(m25p_ids) - 1)
id = plat_id;
else
dev_warn(&spi->dev, "unrecognized id %s\n", data->type);
}
info = (void *)id->driver_data;
if (info->jedec_id) {
const struct spi_device_id *jid;
jid = jedec_probe(spi);
if (IS_ERR(jid)) {
return PTR_ERR(jid);
} else if (jid != id) {
/*
* JEDEC knows better, so overwrite platform ID. We
* can't trust partitions any longer, but we'll let
* mtd apply them anyway, since some partitions may be
* marked read-only, and we don't want to lose that
* information, even if it's not 100% accurate.
*/
dev_warn(&spi->dev, "found %s, expected %s\n",
jid->name, id->name);
id = jid;
info = (void *)jid->driver_data;
}
}
flash = devm_kzalloc(&spi->dev, sizeof(*flash), GFP_KERNEL);
if (!flash)
return -ENOMEM;
flash->command = devm_kzalloc(&spi->dev, MAX_CMD_SIZE, GFP_KERNEL);
if (!flash->command)
return -ENOMEM;
flash->spi = spi;
mutex_init(&flash->lock);
spi_set_drvdata(spi, flash);
/*
* Atmel, SST and Intel/Numonyx serial flash tend to power
* up with the software protection bits set
*/
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ATMEL ||
JEDEC_MFR(info->jedec_id) == CFI_MFR_INTEL ||
JEDEC_MFR(info->jedec_id) == CFI_MFR_SST) {
write_enable(flash);
write_sr(flash, 0);
}
if (data && data->name)
flash->mtd.name = data->name;
else
flash->mtd.name = dev_name(&spi->dev);
flash->mtd.type = MTD_NORFLASH;
flash->mtd.writesize = 1;
flash->mtd.flags = MTD_CAP_NORFLASH;
flash->mtd.size = info->sector_size * info->n_sectors;
flash->mtd._erase = m25p80_erase;
flash->mtd._read = m25p80_read;
nor = &flash->spi_nor;
/* flash protection support for STmicro chips */
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ST) {
flash->mtd._lock = m25p80_lock;
flash->mtd._unlock = m25p80_unlock;
}
/* sst flash chips use AAI word program */
if (info->flags & SST_WRITE)
flash->mtd._write = sst_write;
else
flash->mtd._write = m25p80_write;
/* install the hooks */
nor->read = m25p80_read;
nor->write = m25p80_write;
nor->erase = m25p80_erase;
nor->write_reg = m25p80_write_reg;
nor->read_reg = m25p80_read_reg;
/* prefer "small sector" erase if possible */
if (info->flags & SECT_4K) {
flash->erase_opcode = OPCODE_BE_4K;
flash->mtd.erasesize = 4096;
} else if (info->flags & SECT_4K_PMC) {
flash->erase_opcode = OPCODE_BE_4K_PMC;
flash->mtd.erasesize = 4096;
} else {
flash->erase_opcode = OPCODE_SE;
flash->mtd.erasesize = info->sector_size;
}
nor->dev = &spi->dev;
nor->mtd = &flash->mtd;
nor->priv = flash;
if (info->flags & M25P_NO_ERASE)
flash->mtd.flags |= MTD_NO_ERASE;
ppdata.of_node = spi->dev.of_node;
flash->mtd.dev.parent = &spi->dev;
flash->page_size = info->page_size;
flash->mtd.writebufsize = flash->page_size;
if (np) {
/* If we were instantiated by DT, use it */
if (of_property_read_bool(np, "m25p,fast-read"))
flash->flash_read = M25P80_FAST;
else
flash->flash_read = M25P80_NORMAL;
} else {
/* If we weren't instantiated by DT, default to fast-read */
flash->flash_read = M25P80_FAST;
}
/* Some devices cannot do fast-read, no matter what DT tells us */
if (info->flags & M25P_NO_FR)
flash->flash_read = M25P80_NORMAL;
spi_set_drvdata(spi, flash);
flash->mtd.priv = nor;
flash->spi = spi;
/* Quad/Dual-read mode takes precedence over fast/normal */
if (spi->mode & SPI_RX_QUAD && info->flags & M25P80_QUAD_READ) {
ret = set_quad_mode(flash, info->jedec_id);
if (ret) {
dev_err(&flash->spi->dev, "quad mode not supported\n");
if (spi->mode & SPI_RX_QUAD)
mode = SPI_NOR_QUAD;
else if (spi->mode & SPI_RX_DUAL)
mode = SPI_NOR_DUAL;
ret = spi_nor_scan(nor, spi_get_device_id(spi), mode);
if (ret)
return ret;
}
flash->flash_read = M25P80_QUAD;
} else if (spi->mode & SPI_RX_DUAL && info->flags & M25P80_DUAL_READ) {
flash->flash_read = M25P80_DUAL;
}
/* Default commands */
switch (flash->flash_read) {
case M25P80_QUAD:
flash->read_opcode = OPCODE_QUAD_READ;
break;
case M25P80_DUAL:
flash->read_opcode = OPCODE_DUAL_READ;
break;
case M25P80_FAST:
flash->read_opcode = OPCODE_FAST_READ;
break;
case M25P80_NORMAL:
flash->read_opcode = OPCODE_NORM_READ;
break;
default:
dev_err(&flash->spi->dev, "No Read opcode defined\n");
return -EINVAL;
}
flash->program_opcode = OPCODE_PP;
if (info->addr_width)
flash->addr_width = info->addr_width;
else if (flash->mtd.size > 0x1000000) {
/* enable 4-byte addressing if the device exceeds 16MiB */
flash->addr_width = 4;
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_AMD) {
/* Dedicated 4-byte command set */
switch (flash->flash_read) {
case M25P80_QUAD:
flash->read_opcode = OPCODE_QUAD_READ_4B;
break;
case M25P80_DUAL:
flash->read_opcode = OPCODE_DUAL_READ_4B;
break;
case M25P80_FAST:
flash->read_opcode = OPCODE_FAST_READ_4B;
break;
case M25P80_NORMAL:
flash->read_opcode = OPCODE_NORM_READ_4B;
break;
}
flash->program_opcode = OPCODE_PP_4B;
/* No small sector erase for 4-byte command set */
flash->erase_opcode = OPCODE_SE_4B;
flash->mtd.erasesize = info->sector_size;
} else
set_4byte(flash, info->jedec_id, 1);
} else {
flash->addr_width = 3;
}
dev_info(&spi->dev, "%s (%lld Kbytes)\n", id->name,
(long long)flash->mtd.size >> 10);
pr_debug("mtd .name = %s, .size = 0x%llx (%lldMiB) "
".erasesize = 0x%.8x (%uKiB) .numeraseregions = %d\n",
flash->mtd.name,
(long long)flash->mtd.size, (long long)(flash->mtd.size >> 20),
flash->mtd.erasesize, flash->mtd.erasesize / 1024,
flash->mtd.numeraseregions);
if (flash->mtd.numeraseregions)
for (i = 0; i < flash->mtd.numeraseregions; i++)
pr_debug("mtd.eraseregions[%d] = { .offset = 0x%llx, "
".erasesize = 0x%.8x (%uKiB), "
".numblocks = %d }\n",
i, (long long)flash->mtd.eraseregions[i].offset,
flash->mtd.eraseregions[i].erasesize,
flash->mtd.eraseregions[i].erasesize / 1024,
flash->mtd.eraseregions[i].numblocks);
data = dev_get_platdata(&spi->dev);
ppdata.of_node = spi->dev.of_node;
/* partitions should match sector boundaries; and it may be good to
* use readonly partitions for writeprotected sectors (BP2..BP0).
*/
return mtd_device_parse_register(&flash->mtd, NULL, &ppdata,
data ? data->parts : NULL,
data ? data->nr_parts : 0);
......@@ -1341,7 +250,7 @@ static struct spi_driver m25p80_driver = {
.name = "m25p80",
.owner = THIS_MODULE,
},
.id_table = m25p_ids,
.id_table = spi_nor_ids,
.probe = m25p_probe,
.remove = m25p_remove,
......
......@@ -13,43 +13,23 @@
#define _MTD_SERIAL_FLASH_CMDS_H
/* Generic Flash Commands/OPCODEs */
#define FLASH_CMD_WREN 0x06
#define FLASH_CMD_WRDI 0x04
#define FLASH_CMD_RDID 0x9f
#define FLASH_CMD_RDSR 0x05
#define FLASH_CMD_RDSR2 0x35
#define FLASH_CMD_WRSR 0x01
#define FLASH_CMD_SE_4K 0x20
#define FLASH_CMD_SE_32K 0x52
#define FLASH_CMD_SE 0xd8
#define FLASH_CMD_CHIPERASE 0xc7
#define FLASH_CMD_WRVCR 0x81
#define FLASH_CMD_RDVCR 0x85
#define SPINOR_OP_RDSR2 0x35
#define SPINOR_OP_WRVCR 0x81
#define SPINOR_OP_RDVCR 0x85
/* JEDEC Standard - Serial Flash Discoverable Parmeters (SFDP) Commands */
#define FLASH_CMD_READ 0x03 /* READ */
#define FLASH_CMD_READ_FAST 0x0b /* FAST READ */
#define FLASH_CMD_READ_1_1_2 0x3b /* DUAL OUTPUT READ */
#define FLASH_CMD_READ_1_2_2 0xbb /* DUAL I/O READ */
#define FLASH_CMD_READ_1_1_4 0x6b /* QUAD OUTPUT READ */
#define FLASH_CMD_READ_1_4_4 0xeb /* QUAD I/O READ */
#define SPINOR_OP_READ_1_2_2 0xbb /* DUAL I/O READ */
#define SPINOR_OP_READ_1_4_4 0xeb /* QUAD I/O READ */
#define FLASH_CMD_WRITE 0x02 /* PAGE PROGRAM */
#define FLASH_CMD_WRITE_1_1_2 0xa2 /* DUAL INPUT PROGRAM */
#define FLASH_CMD_WRITE_1_2_2 0xd2 /* DUAL INPUT EXT PROGRAM */
#define FLASH_CMD_WRITE_1_1_4 0x32 /* QUAD INPUT PROGRAM */
#define FLASH_CMD_WRITE_1_4_4 0x12 /* QUAD INPUT EXT PROGRAM */
#define FLASH_CMD_EN4B_ADDR 0xb7 /* Enter 4-byte address mode */
#define FLASH_CMD_EX4B_ADDR 0xe9 /* Exit 4-byte address mode */
#define SPINOR_OP_WRITE 0x02 /* PAGE PROGRAM */
#define SPINOR_OP_WRITE_1_1_2 0xa2 /* DUAL INPUT PROGRAM */
#define SPINOR_OP_WRITE_1_2_2 0xd2 /* DUAL INPUT EXT PROGRAM */
#define SPINOR_OP_WRITE_1_1_4 0x32 /* QUAD INPUT PROGRAM */
#define SPINOR_OP_WRITE_1_4_4 0x12 /* QUAD INPUT EXT PROGRAM */
/* READ commands with 32-bit addressing */
#define FLASH_CMD_READ4 0x13
#define FLASH_CMD_READ4_FAST 0x0c
#define FLASH_CMD_READ4_1_1_2 0x3c
#define FLASH_CMD_READ4_1_2_2 0xbc
#define FLASH_CMD_READ4_1_1_4 0x6c
#define FLASH_CMD_READ4_1_4_4 0xec
#define SPINOR_OP_READ4_1_2_2 0xbc
#define SPINOR_OP_READ4_1_4_4 0xec
/* Configuration flags */
#define FLASH_FLAG_SINGLE 0x000000ff
......
......@@ -280,14 +280,11 @@ __setup("slram=", mtd_slram_setup);
static int __init init_slram(void)
{
char *devname;
int i;
#ifndef MODULE
char *devstart;
char *devlength;
i = 0;
if (!map) {
E("slram: not enough parameters.\n");
return(-EINVAL);
......@@ -314,6 +311,7 @@ static int __init init_slram(void)
}
#else
int count;
int i;
for (count = 0; count < SLRAM_MAX_DEVICES_PARAMS && map[count];
count++) {
......
......@@ -19,6 +19,7 @@
#include <linux/mfd/syscon.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/spi-nor.h>
#include <linux/sched.h>
#include <linux/delay.h>
#include <linux/io.h>
......@@ -201,44 +202,6 @@
#define STFSM_MAX_WAIT_SEQ_MS 1000 /* FSM execution time */
/* Flash Commands */
#define FLASH_CMD_WREN 0x06
#define FLASH_CMD_WRDI 0x04
#define FLASH_CMD_RDID 0x9f
#define FLASH_CMD_RDSR 0x05
#define FLASH_CMD_RDSR2 0x35
#define FLASH_CMD_WRSR 0x01
#define FLASH_CMD_SE_4K 0x20
#define FLASH_CMD_SE_32K 0x52
#define FLASH_CMD_SE 0xd8
#define FLASH_CMD_CHIPERASE 0xc7
#define FLASH_CMD_WRVCR 0x81
#define FLASH_CMD_RDVCR 0x85
#define FLASH_CMD_READ 0x03 /* READ */
#define FLASH_CMD_READ_FAST 0x0b /* FAST READ */
#define FLASH_CMD_READ_1_1_2 0x3b /* DUAL OUTPUT READ */
#define FLASH_CMD_READ_1_2_2 0xbb /* DUAL I/O READ */
#define FLASH_CMD_READ_1_1_4 0x6b /* QUAD OUTPUT READ */
#define FLASH_CMD_READ_1_4_4 0xeb /* QUAD I/O READ */
#define FLASH_CMD_WRITE 0x02 /* PAGE PROGRAM */
#define FLASH_CMD_WRITE_1_1_2 0xa2 /* DUAL INPUT PROGRAM */
#define FLASH_CMD_WRITE_1_2_2 0xd2 /* DUAL INPUT EXT PROGRAM */
#define FLASH_CMD_WRITE_1_1_4 0x32 /* QUAD INPUT PROGRAM */
#define FLASH_CMD_WRITE_1_4_4 0x12 /* QUAD INPUT EXT PROGRAM */
#define FLASH_CMD_EN4B_ADDR 0xb7 /* Enter 4-byte address mode */
#define FLASH_CMD_EX4B_ADDR 0xe9 /* Exit 4-byte address mode */
/* READ commands with 32-bit addressing (N25Q256 and S25FLxxxS) */
#define FLASH_CMD_READ4 0x13
#define FLASH_CMD_READ4_FAST 0x0c
#define FLASH_CMD_READ4_1_1_2 0x3c
#define FLASH_CMD_READ4_1_2_2 0xbc
#define FLASH_CMD_READ4_1_1_4 0x6c
#define FLASH_CMD_READ4_1_4_4 0xec
/* S25FLxxxS commands */
#define S25FL_CMD_WRITE4_1_1_4 0x34
#define S25FL_CMD_SE4 0xdc
......@@ -246,7 +209,7 @@
#define S25FL_CMD_DYBWR 0xe1
#define S25FL_CMD_DYBRD 0xe0
#define S25FL_CMD_WRITE4 0x12 /* Note, opcode clashes with
* 'FLASH_CMD_WRITE_1_4_4'
* 'SPINOR_OP_WRITE_1_4_4'
* as found on N25Qxxx devices! */
/* Status register */
......@@ -261,6 +224,12 @@
#define S25FL_STATUS_E_ERR 0x20
#define S25FL_STATUS_P_ERR 0x40
#define N25Q_CMD_WRVCR 0x81
#define N25Q_CMD_RDVCR 0x85
#define N25Q_CMD_RDVECR 0x65
#define N25Q_CMD_RDNVCR 0xb5
#define N25Q_CMD_WRNVCR 0xb1
#define FLASH_PAGESIZE 256 /* In Bytes */
#define FLASH_PAGESIZE_32 (FLASH_PAGESIZE / 4) /* In uint32_t */
#define FLASH_MAX_BUSY_WAIT (300 * HZ) /* Maximum 'CHIPERASE' time */
......@@ -270,7 +239,6 @@
*/
#define CFG_READ_TOGGLE_32BIT_ADDR 0x00000001
#define CFG_WRITE_TOGGLE_32BIT_ADDR 0x00000002
#define CFG_WRITE_EX_32BIT_ADDR_DELAY 0x00000004
#define CFG_ERASESEC_TOGGLE_32BIT_ADDR 0x00000008
#define CFG_S25FL_CHECK_ERROR_FLAGS 0x00000010
......@@ -329,7 +297,7 @@ struct flash_info {
u32 jedec_id;
u16 ext_id;
/*
* The size listed here is what works with FLASH_CMD_SE, which isn't
* The size listed here is what works with SPINOR_OP_SE, which isn't
* necessarily called a "sector" by the vendor.
*/
unsigned sector_size;
......@@ -369,17 +337,26 @@ static struct flash_info flash_types[] = {
{ "m25px32", 0x207116, 0, 64 * 1024, 64, M25PX_FLAG, 75, NULL },
{ "m25px64", 0x207117, 0, 64 * 1024, 128, M25PX_FLAG, 75, NULL },
/* Macronix MX25xxx
* - Support for 'FLASH_FLAG_WRITE_1_4_4' is omitted for devices
* where operating frequency must be reduced.
*/
#define MX25_FLAG (FLASH_FLAG_READ_WRITE | \
FLASH_FLAG_READ_FAST | \
FLASH_FLAG_READ_1_1_2 | \
FLASH_FLAG_READ_1_2_2 | \
FLASH_FLAG_READ_1_1_4 | \
FLASH_FLAG_READ_1_4_4 | \
FLASH_FLAG_SE_4K | \
FLASH_FLAG_SE_32K)
{ "mx25l3255e", 0xc29e16, 0, 64 * 1024, 64,
(MX25_FLAG | FLASH_FLAG_WRITE_1_4_4), 86,
stfsm_mx25_config},
{ "mx25l25635e", 0xc22019, 0, 64*1024, 512,
(MX25_FLAG | FLASH_FLAG_32BIT_ADDR | FLASH_FLAG_RESET), 70,
stfsm_mx25_config },
{ "mx25l25655e", 0xc22619, 0, 64*1024, 512,
(MX25_FLAG | FLASH_FLAG_32BIT_ADDR | FLASH_FLAG_RESET), 70,
stfsm_mx25_config},
#define N25Q_FLAG (FLASH_FLAG_READ_WRITE | \
FLASH_FLAG_READ_FAST | \
......@@ -407,6 +384,8 @@ static struct flash_info flash_types[] = {
FLASH_FLAG_READ_1_4_4 | \
FLASH_FLAG_WRITE_1_1_4 | \
FLASH_FLAG_READ_FAST)
{ "s25fl032p", 0x010215, 0x4d00, 64 * 1024, 64, S25FLXXXP_FLAG, 80,
stfsm_s25fl_config},
{ "s25fl129p0", 0x012018, 0x4d00, 256 * 1024, 64, S25FLXXXP_FLAG, 80,
stfsm_s25fl_config },
{ "s25fl129p1", 0x012018, 0x4d01, 64 * 1024, 256, S25FLXXXP_FLAG, 80,
......@@ -473,22 +452,22 @@ static struct flash_info flash_types[] = {
/* Default READ configurations, in order of preference */
static struct seq_rw_config default_read_configs[] = {
{FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ_1_4_4, 0, 4, 4, 0x00, 2, 4},
{FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ_1_1_4, 0, 1, 4, 0x00, 4, 0},
{FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ_1_2_2, 0, 2, 2, 0x00, 4, 0},
{FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, FLASH_CMD_READ_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, FLASH_CMD_READ, 0, 1, 1, 0x00, 0, 0},
{FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ_1_4_4, 0, 4, 4, 0x00, 2, 4},
{FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ_1_1_4, 0, 1, 4, 0x00, 4, 0},
{FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ_1_2_2, 0, 2, 2, 0x00, 4, 0},
{FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, SPINOR_OP_READ_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, SPINOR_OP_READ, 0, 1, 1, 0x00, 0, 0},
{0x00, 0, 0, 0, 0, 0x00, 0, 0},
};
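/*
 * Decoding one entry (a sketch, assuming the seq_rw_config field order is
 * {flags, cmd, write, addr_pads, data_pads, mode_data, mode_cycles,
 * dummy_cycles}): the first line above selects the quad I/O read opcode
 * SPINOR_OP_READ_1_4_4 as a read sequence (write = 0) using 4 pads for both
 * the address and data phases, with mode byte 0x00 sent over 2 cycles and
 * 4 dummy cycles.
 */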
/* Default WRITE configurations */
static struct seq_rw_config default_write_configs[] = {
{FLASH_FLAG_WRITE_1_4_4, FLASH_CMD_WRITE_1_4_4, 1, 4, 4, 0x00, 0, 0},
{FLASH_FLAG_WRITE_1_1_4, FLASH_CMD_WRITE_1_1_4, 1, 1, 4, 0x00, 0, 0},
{FLASH_FLAG_WRITE_1_2_2, FLASH_CMD_WRITE_1_2_2, 1, 2, 2, 0x00, 0, 0},
{FLASH_FLAG_WRITE_1_1_2, FLASH_CMD_WRITE_1_1_2, 1, 1, 2, 0x00, 0, 0},
{FLASH_FLAG_READ_WRITE, FLASH_CMD_WRITE, 1, 1, 1, 0x00, 0, 0},
{FLASH_FLAG_WRITE_1_4_4, SPINOR_OP_WRITE_1_4_4, 1, 4, 4, 0x00, 0, 0},
{FLASH_FLAG_WRITE_1_1_4, SPINOR_OP_WRITE_1_1_4, 1, 1, 4, 0x00, 0, 0},
{FLASH_FLAG_WRITE_1_2_2, SPINOR_OP_WRITE_1_2_2, 1, 2, 2, 0x00, 0, 0},
{FLASH_FLAG_WRITE_1_1_2, SPINOR_OP_WRITE_1_1_2, 1, 1, 2, 0x00, 0, 0},
{FLASH_FLAG_READ_WRITE, SPINOR_OP_WRITE, 1, 1, 1, 0x00, 0, 0},
{0x00, 0, 0, 0, 0, 0x00, 0, 0},
};
......@@ -511,12 +490,12 @@ static struct seq_rw_config default_write_configs[] = {
* cycles.
*/
static struct seq_rw_config n25q_read3_configs[] = {
{FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ_1_4_4, 0, 4, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ_1_1_4, 0, 1, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ_1_2_2, 0, 2, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, FLASH_CMD_READ_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, FLASH_CMD_READ, 0, 1, 1, 0x00, 0, 0},
{FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ_1_4_4, 0, 4, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ_1_1_4, 0, 1, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ_1_2_2, 0, 2, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, SPINOR_OP_READ_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, SPINOR_OP_READ, 0, 1, 1, 0x00, 0, 0},
{0x00, 0, 0, 0, 0, 0x00, 0, 0},
};
......@@ -526,12 +505,12 @@ static struct seq_rw_config n25q_read3_configs[] = {
* - 'FAST' variants configured for 8 dummy cycles (see note above.)
*/
static struct seq_rw_config n25q_read4_configs[] = {
{FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ4_1_4_4, 0, 4, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ4_1_2_2, 0, 2, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, FLASH_CMD_READ4_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, FLASH_CMD_READ4, 0, 1, 1, 0x00, 0, 0},
{FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ4_1_4_4, 0, 4, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ4_1_2_2, 0, 2, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, SPINOR_OP_READ4_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, SPINOR_OP_READ4, 0, 1, 1, 0x00, 0, 0},
{0x00, 0, 0, 0, 0, 0x00, 0, 0},
};
......@@ -544,7 +523,7 @@ static int stfsm_mx25_en_32bit_addr_seq(struct stfsm_seq *seq)
{
seq->seq_opc[0] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_EN4B_ADDR) |
SEQ_OPC_OPCODE(SPINOR_OP_EN4B) |
SEQ_OPC_CSDEASSERT);
seq->seq[0] = STFSM_INST_CMD1;
......@@ -572,12 +551,12 @@ static int stfsm_mx25_en_32bit_addr_seq(struct stfsm_seq *seq)
* entering a state that is incompatible with the SPIBoot Controller.
*/
static struct seq_rw_config stfsm_s25fl_read4_configs[] = {
{FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ4_1_4_4, 0, 4, 4, 0x00, 2, 4},
{FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ4_1_2_2, 0, 2, 2, 0x00, 4, 0},
{FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, FLASH_CMD_READ4_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, FLASH_CMD_READ4, 0, 1, 1, 0x00, 0, 0},
{FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ4_1_4_4, 0, 4, 4, 0x00, 2, 4},
{FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8},
{FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ4_1_2_2, 0, 2, 2, 0x00, 4, 0},
{FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8},
{FLASH_FLAG_READ_FAST, SPINOR_OP_READ4_FAST, 0, 1, 1, 0x00, 0, 8},
{FLASH_FLAG_READ_WRITE, SPINOR_OP_READ4, 0, 1, 1, 0x00, 0, 0},
{0x00, 0, 0, 0, 0, 0x00, 0, 0},
};
......@@ -590,13 +569,13 @@ static struct seq_rw_config stfsm_s25fl_write4_configs[] = {
/*
* [W25Qxxx] Configuration
*/
#define W25Q_STATUS_QE (0x1 << 9)
#define W25Q_STATUS_QE (0x1 << 1)
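/* On W25Qxxx parts the quad-enable (QE) flag is bit 1 of the second status
 * register (bit 9 of the combined SR2:SR1 value). The mask moves from
 * (0x1 << 9) to (0x1 << 1) because the configuration code below now reads
 * SR2 on its own via SPINOR_OP_RDSR2 and tests QE within that single byte.
 */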
static struct stfsm_seq stfsm_seq_read_jedec = {
.data_size = TRANSFER_SIZE(8),
.seq_opc[0] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_RDID)),
SEQ_OPC_OPCODE(SPINOR_OP_RDID)),
.seq = {
STFSM_INST_CMD1,
STFSM_INST_DATA_READ,
......@@ -612,7 +591,7 @@ static struct stfsm_seq stfsm_seq_read_status_fifo = {
.data_size = TRANSFER_SIZE(4),
.seq_opc[0] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_RDSR)),
SEQ_OPC_OPCODE(SPINOR_OP_RDSR)),
.seq = {
STFSM_INST_CMD1,
STFSM_INST_DATA_READ,
......@@ -628,10 +607,10 @@ static struct stfsm_seq stfsm_seq_erase_sector = {
/* 'addr_cfg' configured during initialisation */
.seq_opc = {
(SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT),
SEQ_OPC_OPCODE(SPINOR_OP_WREN) | SEQ_OPC_CSDEASSERT),
(SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_SE)),
SEQ_OPC_OPCODE(SPINOR_OP_SE)),
},
.seq = {
STFSM_INST_CMD1,
......@@ -649,10 +628,10 @@ static struct stfsm_seq stfsm_seq_erase_sector = {
static struct stfsm_seq stfsm_seq_erase_chip = {
.seq_opc = {
(SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT),
SEQ_OPC_OPCODE(SPINOR_OP_WREN) | SEQ_OPC_CSDEASSERT),
(SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_CHIPERASE) | SEQ_OPC_CSDEASSERT),
SEQ_OPC_OPCODE(SPINOR_OP_CHIP_ERASE) | SEQ_OPC_CSDEASSERT),
},
.seq = {
STFSM_INST_CMD1,
......@@ -669,26 +648,9 @@ static struct stfsm_seq stfsm_seq_erase_chip = {
static struct stfsm_seq stfsm_seq_write_status = {
.seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT),
.seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WRSR)),
.seq = {
STFSM_INST_CMD1,
STFSM_INST_CMD2,
STFSM_INST_STA_WR1,
STFSM_INST_STOP,
},
.seq_cfg = (SEQ_CFG_PADS_1 |
SEQ_CFG_READNOTWRITE |
SEQ_CFG_CSDEASSERT |
SEQ_CFG_STARTSEQ),
};
static struct stfsm_seq stfsm_seq_wrvcr = {
.seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT),
SEQ_OPC_OPCODE(SPINOR_OP_WREN) | SEQ_OPC_CSDEASSERT),
.seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WRVCR)),
SEQ_OPC_OPCODE(SPINOR_OP_WRSR)),
.seq = {
STFSM_INST_CMD1,
STFSM_INST_CMD2,
......@@ -704,9 +666,9 @@ static struct stfsm_seq stfsm_seq_wrvcr = {
static int stfsm_n25q_en_32bit_addr_seq(struct stfsm_seq *seq)
{
seq->seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_EN4B_ADDR));
SEQ_OPC_OPCODE(SPINOR_OP_EN4B));
seq->seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WREN) |
SEQ_OPC_OPCODE(SPINOR_OP_WREN) |
SEQ_OPC_CSDEASSERT);
seq->seq[0] = STFSM_INST_CMD2;
......@@ -793,7 +755,7 @@ static void stfsm_read_fifo(struct stfsm *fsm, uint32_t *buf, uint32_t size)
dev_dbg(fsm->dev, "Reading %d bytes from FIFO\n", size);
BUG_ON((((uint32_t)buf) & 0x3) || (size & 0x3));
BUG_ON((((uintptr_t)buf) & 0x3) || (size & 0x3));
while (remaining) {
for (;;) {
......@@ -817,7 +779,7 @@ static int stfsm_write_fifo(struct stfsm *fsm, const uint32_t *buf,
dev_dbg(fsm->dev, "writing %d bytes to FIFO\n", size);
BUG_ON((((uint32_t)buf) & 0x3) || (size & 0x3));
BUG_ON((((uintptr_t)buf) & 0x3) || (size & 0x3));
writesl(fsm->base + SPI_FAST_SEQ_DATA_REG, buf, words);
......@@ -827,7 +789,7 @@ static int stfsm_write_fifo(struct stfsm *fsm, const uint32_t *buf,
static int stfsm_enter_32bit_addr(struct stfsm *fsm, int enter)
{
struct stfsm_seq *seq = &fsm->stfsm_seq_en_32bit_addr;
uint32_t cmd = enter ? FLASH_CMD_EN4B_ADDR : FLASH_CMD_EX4B_ADDR;
uint32_t cmd = enter ? SPINOR_OP_EN4B : SPINOR_OP_EX4B;
seq->seq_opc[0] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
......@@ -851,7 +813,7 @@ static uint8_t stfsm_wait_busy(struct stfsm *fsm)
/* Use RDRS1 */
seq->seq_opc[0] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_RDSR));
SEQ_OPC_OPCODE(SPINOR_OP_RDSR));
/* Load read_status sequence */
stfsm_load_seq(fsm, seq);
......@@ -889,60 +851,57 @@ static uint8_t stfsm_wait_busy(struct stfsm *fsm)
}
static int stfsm_read_status(struct stfsm *fsm, uint8_t cmd,
uint8_t *status)
uint8_t *data, int bytes)
{
struct stfsm_seq *seq = &stfsm_seq_read_status_fifo;
uint32_t tmp;
uint8_t *t = (uint8_t *)&tmp;
int i;
dev_dbg(fsm->dev, "reading STA[%s]\n",
(cmd == FLASH_CMD_RDSR) ? "1" : "2");
dev_dbg(fsm->dev, "read 'status' register [0x%02x], %d byte(s)\n",
cmd, bytes);
seq->seq_opc[0] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
BUG_ON(bytes != 1 && bytes != 2);
seq->seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(cmd)),
stfsm_load_seq(fsm, seq);
stfsm_read_fifo(fsm, &tmp, 4);
*status = (uint8_t)(tmp >> 24);
for (i = 0; i < bytes; i++)
data[i] = t[i];
stfsm_wait_seq(fsm);
return 0;
}
static int stfsm_write_status(struct stfsm *fsm, uint16_t status,
int sta_bytes)
static int stfsm_write_status(struct stfsm *fsm, uint8_t cmd,
uint16_t data, int bytes, int wait_busy)
{
struct stfsm_seq *seq = &stfsm_seq_write_status;
dev_dbg(fsm->dev, "writing STA[%s] 0x%04x\n",
(sta_bytes == 1) ? "1" : "1+2", status);
dev_dbg(fsm->dev,
"write 'status' register [0x%02x], %d byte(s), 0x%04x\n"
" %s wait-busy\n", cmd, bytes, data, wait_busy ? "with" : "no");
seq->status = (uint32_t)status | STA_PADS_1 | STA_CSDEASSERT;
seq->seq[2] = (sta_bytes == 1) ?
STFSM_INST_STA_WR1 : STFSM_INST_STA_WR1_2;
BUG_ON(bytes != 1 && bytes != 2);
stfsm_load_seq(fsm, seq);
stfsm_wait_seq(fsm);
return 0;
};
static int stfsm_wrvcr(struct stfsm *fsm, uint8_t data)
{
struct stfsm_seq *seq = &stfsm_seq_wrvcr;
dev_dbg(fsm->dev, "writing VCR 0x%02x\n", data);
seq->seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(cmd));
seq->status = (STA_DATA_BYTE1(data) | STA_PADS_1 | STA_CSDEASSERT);
seq->status = (uint32_t)data | STA_PADS_1 | STA_CSDEASSERT;
seq->seq[2] = (bytes == 1) ? STFSM_INST_STA_WR1 : STFSM_INST_STA_WR1_2;
stfsm_load_seq(fsm, seq);
stfsm_wait_seq(fsm);
if (wait_busy)
stfsm_wait_busy(fsm);
return 0;
}
......@@ -1027,7 +986,7 @@ static void stfsm_prepare_rw_seq(struct stfsm *fsm,
if (cfg->write)
seq->seq_opc[i++] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WREN) |
SEQ_OPC_OPCODE(SPINOR_OP_WREN) |
SEQ_OPC_CSDEASSERT);
/* Address configuration (24 or 32-bit addresses) */
......@@ -1149,31 +1108,36 @@ static int stfsm_mx25_config(struct stfsm *fsm)
stfsm_mx25_en_32bit_addr_seq(&fsm->stfsm_seq_en_32bit_addr);
soc_reset = stfsm_can_handle_soc_reset(fsm);
if (soc_reset || !fsm->booted_from_spi) {
if (soc_reset || !fsm->booted_from_spi)
/* If we can handle SoC resets, we enable 32-bit address
* mode pervasively */
stfsm_enter_32bit_addr(fsm, 1);
} else {
else
/* Else, enable/disable 32-bit addressing before/after
* each operation */
fsm->configuration = (CFG_READ_TOGGLE_32BIT_ADDR |
CFG_WRITE_TOGGLE_32BIT_ADDR |
CFG_ERASESEC_TOGGLE_32BIT_ADDR);
/* It seems a small delay is required after exiting
* 32-bit mode following a write operation. The issue
* is under investigation.
*/
fsm->configuration |= CFG_WRITE_EX_32BIT_ADDR_DELAY;
}
}
/* For QUAD mode, set 'QE' STATUS bit */
/* Check status of 'QE' bit, update if required. */
stfsm_read_status(fsm, SPINOR_OP_RDSR, &sta, 1);
data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1;
if (data_pads == 4) {
stfsm_read_status(fsm, FLASH_CMD_RDSR, &sta);
if (!(sta & MX25_STATUS_QE)) {
/* Set 'QE' */
sta |= MX25_STATUS_QE;
stfsm_write_status(fsm, sta, 1);
stfsm_write_status(fsm, SPINOR_OP_WRSR, sta, 1, 1);
}
} else {
if (sta & MX25_STATUS_QE) {
/* Clear 'QE' */
sta &= ~MX25_STATUS_QE;
stfsm_write_status(fsm, SPINOR_OP_WRSR, sta, 1, 1);
}
}
return 0;
......@@ -1239,7 +1203,7 @@ static int stfsm_n25q_config(struct stfsm *fsm)
*/
vcr = (N25Q_VCR_DUMMY_CYCLES(8) | N25Q_VCR_XIP_DISABLED |
N25Q_VCR_WRAP_CONT);
stfsm_wrvcr(fsm, vcr);
stfsm_write_status(fsm, N25Q_CMD_WRVCR, vcr, 1, 0);
return 0;
}
......@@ -1297,7 +1261,7 @@ static void stfsm_s25fl_write_dyb(struct stfsm *fsm, uint32_t offs, uint8_t dby)
{
struct stfsm_seq seq = {
.seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WREN) |
SEQ_OPC_OPCODE(SPINOR_OP_WREN) |
SEQ_OPC_CSDEASSERT),
.seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(S25FL_CMD_DYBWR)),
......@@ -1337,7 +1301,7 @@ static int stfsm_s25fl_clear_status_reg(struct stfsm *fsm)
SEQ_OPC_CSDEASSERT),
.seq_opc[1] = (SEQ_OPC_PADS_1 |
SEQ_OPC_CYCLES(8) |
SEQ_OPC_OPCODE(FLASH_CMD_WRDI) |
SEQ_OPC_OPCODE(SPINOR_OP_WRDI) |
SEQ_OPC_CSDEASSERT),
.seq = {
STFSM_INST_CMD1,
......@@ -1367,6 +1331,7 @@ static int stfsm_s25fl_config(struct stfsm *fsm)
uint32_t offs;
uint16_t sta_wr;
uint8_t sr1, cr1, dyb;
int update_sr = 0;
int ret;
if (flags & FLASH_FLAG_32BIT_ADDR) {
......@@ -1414,34 +1379,28 @@ static int stfsm_s25fl_config(struct stfsm *fsm)
}
}
/* Check status of 'QE' bit */
/* Check status of 'QE' bit, update if required. */
stfsm_read_status(fsm, SPINOR_OP_RDSR2, &cr1, 1);
data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1;
stfsm_read_status(fsm, FLASH_CMD_RDSR2, &cr1);
if (data_pads == 4) {
if (!(cr1 & STFSM_S25FL_CONFIG_QE)) {
/* Set 'QE' */
cr1 |= STFSM_S25FL_CONFIG_QE;
stfsm_read_status(fsm, FLASH_CMD_RDSR, &sr1);
sta_wr = ((uint16_t)cr1 << 8) | sr1;
stfsm_write_status(fsm, sta_wr, 2);
stfsm_wait_busy(fsm);
update_sr = 1;
}
} else {
if ((cr1 & STFSM_S25FL_CONFIG_QE)) {
if (cr1 & STFSM_S25FL_CONFIG_QE) {
/* Clear 'QE' */
cr1 &= ~STFSM_S25FL_CONFIG_QE;
stfsm_read_status(fsm, FLASH_CMD_RDSR, &sr1);
sta_wr = ((uint16_t)cr1 << 8) | sr1;
stfsm_write_status(fsm, sta_wr, 2);
stfsm_wait_busy(fsm);
update_sr = 1;
}
}
if (update_sr) {
stfsm_read_status(fsm, SPINOR_OP_RDSR, &sr1, 1);
sta_wr = ((uint16_t)cr1 << 8) | sr1;
stfsm_write_status(fsm, SPINOR_OP_WRSR, sta_wr, 2, 1);
}
/*
......@@ -1456,27 +1415,36 @@ static int stfsm_s25fl_config(struct stfsm *fsm)
static int stfsm_w25q_config(struct stfsm *fsm)
{
uint32_t data_pads;
uint16_t sta_wr;
uint8_t sta1, sta2;
uint8_t sr1, sr2;
uint16_t sr_wr;
int update_sr = 0;
int ret;
ret = stfsm_prepare_rwe_seqs_default(fsm);
if (ret)
return ret;
/* If using QUAD mode, set QE STATUS bit */
/* Check status of 'QE' bit, update if required. */
stfsm_read_status(fsm, SPINOR_OP_RDSR2, &sr2, 1);
data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1;
if (data_pads == 4) {
stfsm_read_status(fsm, FLASH_CMD_RDSR, &sta1);
stfsm_read_status(fsm, FLASH_CMD_RDSR2, &sta2);
sta_wr = ((uint16_t)sta2 << 8) | sta1;
sta_wr |= W25Q_STATUS_QE;
stfsm_write_status(fsm, sta_wr, 2);
stfsm_wait_busy(fsm);
if (!(sr2 & W25Q_STATUS_QE)) {
/* Set 'QE' */
sr2 |= W25Q_STATUS_QE;
update_sr = 1;
}
} else {
if (sr2 & W25Q_STATUS_QE) {
/* Clear 'QE' */
sr2 &= ~W25Q_STATUS_QE;
update_sr = 1;
}
}
if (update_sr) {
/* Write status register */
stfsm_read_status(fsm, SPINOR_OP_RDSR, &sr1, 1);
sr_wr = ((uint16_t)sr2 << 8) | sr1;
stfsm_write_status(fsm, SPINOR_OP_WRSR, sr_wr, 2, 1);
}
return 0;
......@@ -1506,7 +1474,7 @@ static int stfsm_read(struct stfsm *fsm, uint8_t *buf, uint32_t size,
read_mask = (data_pads << 2) - 1;
/* Handle non-aligned buf */
p = ((uint32_t)buf & 0x3) ? (uint8_t *)page_buf : buf;
p = ((uintptr_t)buf & 0x3) ? (uint8_t *)page_buf : buf;
/* Handle non-aligned size */
size_ub = (size + read_mask) & ~read_mask;
......@@ -1528,7 +1496,7 @@ static int stfsm_read(struct stfsm *fsm, uint8_t *buf, uint32_t size,
}
/* Handle non-aligned buf */
if ((uint32_t)buf & 0x3)
if ((uintptr_t)buf & 0x3)
memcpy(buf, page_buf, size);
/* Wait for sequence to finish */
......@@ -1570,7 +1538,7 @@ static int stfsm_write(struct stfsm *fsm, const uint8_t *buf,
write_mask = (data_pads << 2) - 1;
/* Handle non-aligned buf */
if ((uint32_t)buf & 0x3) {
if ((uintptr_t)buf & 0x3) {
memcpy(page_buf, buf, size);
p = (uint8_t *)page_buf;
} else {
......@@ -1628,11 +1596,8 @@ static int stfsm_write(struct stfsm *fsm, const uint8_t *buf,
stfsm_s25fl_clear_status_reg(fsm);
/* Exit 32-bit address mode, if required */
if (fsm->configuration & CFG_WRITE_TOGGLE_32BIT_ADDR) {
if (fsm->configuration & CFG_WRITE_TOGGLE_32BIT_ADDR)
stfsm_enter_32bit_addr(fsm, 0);
if (fsm->configuration & CFG_WRITE_EX_32BIT_ADDR_DELAY)
udelay(1);
}
return 0;
}
......@@ -1736,7 +1701,7 @@ static int stfsm_mtd_write(struct mtd_info *mtd, loff_t to, size_t len,
while (len) {
/* Write up to page boundary */
bytes = min(FLASH_PAGESIZE - page_offs, len);
bytes = min_t(size_t, FLASH_PAGESIZE - page_offs, len);
ret = stfsm_write(fsm, b, bytes, to);
if (ret)
......@@ -1935,6 +1900,13 @@ static int stfsm_init(struct stfsm *fsm)
fsm->base + SPI_CONFIGDATA);
writel(STFSM_DEFAULT_WR_TIME, fsm->base + SPI_STATUS_WR_TIME_REG);
/*
* Set the FSM 'WAIT' delay to the minimum workable value. Note, for
* our purposes, the WAIT instruction is used purely to achieve
* "sequence validity" rather than actually implement a delay.
*/
writel(0x00000001, fsm->base + SPI_PROGRAM_ERASE_TIME);
/* Clear FIFO, just in case */
stfsm_clear_fifo(fsm);
......@@ -2086,7 +2058,7 @@ static int stfsm_remove(struct platform_device *pdev)
return mtd_device_unregister(&fsm->mtd);
}
static struct of_device_id stfsm_match[] = {
static const struct of_device_id stfsm_match[] = {
{ .compatible = "st,spi-fsm", },
{},
};
......
menu "LPDDR flash memory drivers"
depends on MTD!=n
menu "LPDDR & LPDDR2 PCM memory drivers"
depends on MTD
config MTD_LPDDR
tristate "Support for LPDDR flash chips"
......@@ -17,4 +17,13 @@ config MTD_QINFO_PROBE
Window QINFO interface, permits software to be used for entire
families of devices. This serves a similar purpose to CFI on legacy
Flash products
config MTD_LPDDR2_NVM
# ARM dependency is only for writel_relaxed()
depends on MTD && ARM
tristate "Support for LPDDR2-NVM flash chips"
help
This option enables support for PCM memories with an LPDDR2-NVM
(Low Power Double Data Rate 2) interface.
endmenu
......@@ -4,3 +4,4 @@
obj-$(CONFIG_MTD_QINFO_PROBE) += qinfo_probe.o
obj-$(CONFIG_MTD_LPDDR) += lpddr_cmds.o
obj-$(CONFIG_MTD_LPDDR2_NVM) += lpddr2_nvm.o
/*
* LPDDR2-NVM MTD driver. This module provides read, write, erase, lock/unlock
* support for LPDDR2-NVM PCM memories
*
* Copyright © 2012 Micron Technology, Inc.
*
* Vincenzo Aliberti <vincenzo.aliberti@gmail.com>
* Domenico Manna <domenico.manna@gmail.com>
* Many thanks to Andrea Vigilante for initial enabling
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": %s: " fmt, __func__
#include <linux/init.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mtd/map.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <linux/ioport.h>
#include <linux/err.h>
/* Parameters */
#define ERASE_BLOCKSIZE (0x00020000/2) /* in Word */
#define WRITE_BUFFSIZE (0x00000400/2) /* in Word */
#define OW_BASE_ADDRESS 0x00000000 /* OW offset */
#define BUS_WIDTH 0x00000020 /* x32 devices */
/* PFOW symbols address offset */
#define PFOW_QUERY_STRING_P (0x0000/2) /* in Word */
#define PFOW_QUERY_STRING_F (0x0002/2) /* in Word */
#define PFOW_QUERY_STRING_O (0x0004/2) /* in Word */
#define PFOW_QUERY_STRING_W (0x0006/2) /* in Word */
/* OW registers address */
#define CMD_CODE_OFS (0x0080/2) /* in Word */
#define CMD_DATA_OFS (0x0084/2) /* in Word */
#define CMD_ADD_L_OFS (0x0088/2) /* in Word */
#define CMD_ADD_H_OFS (0x008A/2) /* in Word */
#define MPR_L_OFS (0x0090/2) /* in Word */
#define MPR_H_OFS (0x0092/2) /* in Word */
#define CMD_EXEC_OFS (0x00C0/2) /* in Word */
#define STATUS_REG_OFS (0x00CC/2) /* in Word */
#define PRG_BUFFER_OFS (0x0010/2) /* in Word */
/* Datamask */
#define MR_CFGMASK 0x8000
#define SR_OK_DATAMASK 0x0080
/* LPDDR2-NVM Commands */
#define LPDDR2_NVM_LOCK 0x0061
#define LPDDR2_NVM_UNLOCK 0x0062
#define LPDDR2_NVM_SW_PROGRAM 0x0041
#define LPDDR2_NVM_SW_OVERWRITE 0x0042
#define LPDDR2_NVM_BUF_PROGRAM 0x00E9
#define LPDDR2_NVM_BUF_OVERWRITE 0x00EA
#define LPDDR2_NVM_ERASE 0x0020
/* LPDDR2-NVM Registers offset */
#define LPDDR2_MODE_REG_DATA 0x0040
#define LPDDR2_MODE_REG_CFG 0x0050
/*
* Internal Type Definitions
* pcm_int_data contains memory controller details:
* @ctl_regs : base address of the remapped controller registers; the
*             LPDDR2_MODE_REG_DATA and LPDDR2_MODE_REG_CFG offsets are
*             applied to it
* @bus_width: memory bus-width (e.g. x16: 2 bytes, x32: 4 bytes)
*/
struct pcm_int_data {
void __iomem *ctl_regs;
int bus_width;
};
static DEFINE_MUTEX(lpdd2_nvm_mutex);
/*
* Build a map_word starting from an u_long
*/
static inline map_word build_map_word(u_long myword)
{
map_word val = { {0} };
val.x[0] = myword;
return val;
}
/*
* Build Mode Register Configuration DataMask based on device bus-width
*/
static inline u_int build_mr_cfgmask(u_int bus_width)
{
u_int val = MR_CFGMASK;
if (bus_width == 0x0004) /* x32 device */
val = val << 16;
return val;
}
/*
* Build Status Register OK DataMask based on device bus-width
*/
static inline u_int build_sr_ok_datamask(u_int bus_width)
{
u_int val = SR_OK_DATAMASK;
if (bus_width == 0x0004) /* x32 device */
val = (val << 16)+val;
return val;
}
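/*
 * Worked values, for illustration, from the two helpers above: in the
 * default case the masks are 0x8000 (MR_CFGMASK) and 0x0080
 * (SR_OK_DATAMASK); when bus_width indicates an x32 device (0x0004, i.e.
 * two x16 dies stacked) they widen to 0x80000000 and 0x00800080 so that
 * both halves of the bus are covered.
 */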
/*
* Evaluates Overlay Window Control Registers address
*/
static inline u_long ow_reg_add(struct map_info *map, u_long offset)
{
u_long val = 0;
struct pcm_int_data *pcm_data = map->fldrv_priv;
val = map->pfow_base + offset*pcm_data->bus_width;
return val;
}
/*
* Enable lpddr2-nvm Overlay Window
* Overlay Window is a memory mapped area containing all LPDDR2-NVM registers
* used by device commands as well as user-visible resources like Device Status
* Register, Device ID, etc
*/
static inline void ow_enable(struct map_info *map)
{
struct pcm_int_data *pcm_data = map->fldrv_priv;
writel_relaxed(build_mr_cfgmask(pcm_data->bus_width) | 0x18,
pcm_data->ctl_regs + LPDDR2_MODE_REG_CFG);
writel_relaxed(0x01, pcm_data->ctl_regs + LPDDR2_MODE_REG_DATA);
}
/*
* Disable lpddr2-nvm Overlay Window
* Overlay Window is a memory mapped area containing all LPDDR2-NVM registers
* used by device commands as well as user-visible resources like Device Status
* Register, Device ID, etc
*/
static inline void ow_disable(struct map_info *map)
{
struct pcm_int_data *pcm_data = map->fldrv_priv;
writel_relaxed(build_mr_cfgmask(pcm_data->bus_width) | 0x18,
pcm_data->ctl_regs + LPDDR2_MODE_REG_CFG);
writel_relaxed(0x02, pcm_data->ctl_regs + LPDDR2_MODE_REG_DATA);
}
/*
* Execute lpddr2-nvm operations
*/
static int lpddr2_nvm_do_op(struct map_info *map, u_long cmd_code,
u_long cmd_data, u_long cmd_add, u_long cmd_mpr, u_char *buf)
{
map_word add_l = { {0} }, add_h = { {0} }, mpr_l = { {0} },
mpr_h = { {0} }, data_l = { {0} }, cmd = { {0} },
exec_cmd = { {0} }, sr;
map_word data_h = { {0} }; /* only for 2x x16 devices stacked */
u_long i, status_reg, prg_buff_ofs;
struct pcm_int_data *pcm_data = map->fldrv_priv;
u_int sr_ok_datamask = build_sr_ok_datamask(pcm_data->bus_width);
/* Builds low and high words for OW Control Registers */
add_l.x[0] = cmd_add & 0x0000FFFF;
add_h.x[0] = (cmd_add >> 16) & 0x0000FFFF;
mpr_l.x[0] = cmd_mpr & 0x0000FFFF;
mpr_h.x[0] = (cmd_mpr >> 16) & 0x0000FFFF;
cmd.x[0] = cmd_code & 0x0000FFFF;
exec_cmd.x[0] = 0x0001;
data_l.x[0] = cmd_data & 0x0000FFFF;
data_h.x[0] = (cmd_data >> 16) & 0x0000FFFF; /* only for 2x x16 */
/* Set Overlay Window Control Registers */
map_write(map, cmd, ow_reg_add(map, CMD_CODE_OFS));
map_write(map, data_l, ow_reg_add(map, CMD_DATA_OFS));
map_write(map, add_l, ow_reg_add(map, CMD_ADD_L_OFS));
map_write(map, add_h, ow_reg_add(map, CMD_ADD_H_OFS));
map_write(map, mpr_l, ow_reg_add(map, MPR_L_OFS));
map_write(map, mpr_h, ow_reg_add(map, MPR_H_OFS));
if (pcm_data->bus_width == 0x0004) { /* 2x16 devices stacked */
map_write(map, cmd, ow_reg_add(map, CMD_CODE_OFS) + 2);
map_write(map, data_h, ow_reg_add(map, CMD_DATA_OFS) + 2);
map_write(map, add_l, ow_reg_add(map, CMD_ADD_L_OFS) + 2);
map_write(map, add_h, ow_reg_add(map, CMD_ADD_H_OFS) + 2);
map_write(map, mpr_l, ow_reg_add(map, MPR_L_OFS) + 2);
map_write(map, mpr_h, ow_reg_add(map, MPR_H_OFS) + 2);
}
/* Fill Program Buffer */
if ((cmd_code == LPDDR2_NVM_BUF_PROGRAM) ||
(cmd_code == LPDDR2_NVM_BUF_OVERWRITE)) {
prg_buff_ofs = (map_read(map,
ow_reg_add(map, PRG_BUFFER_OFS))).x[0];
for (i = 0; i < cmd_mpr; i++) {
map_write(map, build_map_word(buf[i]), map->pfow_base +
prg_buff_ofs + i);
}
}
/* Command Execute */
map_write(map, exec_cmd, ow_reg_add(map, CMD_EXEC_OFS));
if (pcm_data->bus_width == 0x0004) /* 2x16 devices stacked */
map_write(map, exec_cmd, ow_reg_add(map, CMD_EXEC_OFS) + 2);
/* Status Register Check */
do {
sr = map_read(map, ow_reg_add(map, STATUS_REG_OFS));
status_reg = sr.x[0];
if (pcm_data->bus_width == 0x0004) {/* 2x16 devices stacked */
sr = map_read(map, ow_reg_add(map,
STATUS_REG_OFS) + 2);
status_reg += sr.x[0] << 16;
}
} while ((status_reg & sr_ok_datamask) != sr_ok_datamask);
return (((status_reg & sr_ok_datamask) == sr_ok_datamask) ? 0 : -EIO);
}
/*
* Execute lpddr2-nvm operations @ block level
*/
static int lpddr2_nvm_do_block_op(struct mtd_info *mtd, loff_t start_add,
uint64_t len, u_char block_op)
{
struct map_info *map = mtd->priv;
u_long add, end_add;
int ret = 0;
mutex_lock(&lpdd2_nvm_mutex);
ow_enable(map);
add = start_add;
end_add = add + len;
do {
ret = lpddr2_nvm_do_op(map, block_op, 0x00, add, add, NULL);
if (ret)
goto out;
add += mtd->erasesize;
} while (add < end_add);
out:
ow_disable(map);
mutex_unlock(&lpdd2_nvm_mutex);
return ret;
}
/*
* verify presence of PFOW string
*/
static int lpddr2_nvm_pfow_present(struct map_info *map)
{
map_word pfow_val[4];
unsigned int found = 1;
mutex_lock(&lpdd2_nvm_mutex);
ow_enable(map);
/* Load string from array */
pfow_val[0] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_P));
pfow_val[1] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_F));
pfow_val[2] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_O));
pfow_val[3] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_W));
/* Verify the string loaded vs expected */
if (!map_word_equal(map, build_map_word('P'), pfow_val[0]))
found = 0;
if (!map_word_equal(map, build_map_word('F'), pfow_val[1]))
found = 0;
if (!map_word_equal(map, build_map_word('O'), pfow_val[2]))
found = 0;
if (!map_word_equal(map, build_map_word('W'), pfow_val[3]))
found = 0;
ow_disable(map);
mutex_unlock(&lpdd2_nvm_mutex);
return found;
}
/*
* lpddr2_nvm driver read method
*/
static int lpddr2_nvm_read(struct mtd_info *mtd, loff_t start_add,
size_t len, size_t *retlen, u_char *buf)
{
struct map_info *map = mtd->priv;
mutex_lock(&lpdd2_nvm_mutex);
*retlen = len;
map_copy_from(map, buf, start_add, *retlen);
mutex_unlock(&lpdd2_nvm_mutex);
return 0;
}
/*
* lpddr2_nvm driver write method
*/
static int lpddr2_nvm_write(struct mtd_info *mtd, loff_t start_add,
size_t len, size_t *retlen, const u_char *buf)
{
struct map_info *map = mtd->priv;
struct pcm_int_data *pcm_data = map->fldrv_priv;
u_long add, current_len, tot_len, target_len, my_data;
u_char *write_buf = (u_char *)buf;
int ret = 0;
mutex_lock(&lpdd2_nvm_mutex);
ow_enable(map);
/* Set start value for the variables */
add = start_add;
target_len = len;
tot_len = 0;
while (tot_len < target_len) {
if (!(IS_ALIGNED(add, mtd->writesize))) { /* do sw program */
my_data = write_buf[tot_len];
my_data += (write_buf[tot_len+1]) << 8;
if (pcm_data->bus_width == 0x0004) {/* 2x16 devices */
my_data += (write_buf[tot_len+2]) << 16;
my_data += (write_buf[tot_len+3]) << 24;
}
ret = lpddr2_nvm_do_op(map, LPDDR2_NVM_SW_OVERWRITE,
my_data, add, 0x00, NULL);
if (ret)
goto out;
add += pcm_data->bus_width;
tot_len += pcm_data->bus_width;
} else { /* do buffer program */
current_len = min(target_len - tot_len,
(u_long) mtd->writesize);
ret = lpddr2_nvm_do_op(map, LPDDR2_NVM_BUF_OVERWRITE,
0x00, add, current_len, write_buf + tot_len);
if (ret)
goto out;
add += current_len;
tot_len += current_len;
}
}
out:
*retlen = tot_len;
ow_disable(map);
mutex_unlock(&lpdd2_nvm_mutex);
return ret;
}
/*
* lpddr2_nvm driver erase method
*/
static int lpddr2_nvm_erase(struct mtd_info *mtd, struct erase_info *instr)
{
int ret = lpddr2_nvm_do_block_op(mtd, instr->addr, instr->len,
LPDDR2_NVM_ERASE);
if (!ret) {
instr->state = MTD_ERASE_DONE;
mtd_erase_callback(instr);
}
return ret;
}
/*
* lpddr2_nvm driver unlock method
*/
static int lpddr2_nvm_unlock(struct mtd_info *mtd, loff_t start_add,
uint64_t len)
{
return lpddr2_nvm_do_block_op(mtd, start_add, len, LPDDR2_NVM_UNLOCK);
}
/*
* lpddr2_nvm driver lock method
*/
static int lpddr2_nvm_lock(struct mtd_info *mtd, loff_t start_add,
uint64_t len)
{
return lpddr2_nvm_do_block_op(mtd, start_add, len, LPDDR2_NVM_LOCK);
}
/*
* lpddr2_nvm driver probe method
*/
static int lpddr2_nvm_probe(struct platform_device *pdev)
{
struct map_info *map;
struct mtd_info *mtd;
struct resource *add_range;
struct resource *control_regs;
struct pcm_int_data *pcm_data;
/* Allocate memory control_regs data structures */
pcm_data = devm_kzalloc(&pdev->dev, sizeof(*pcm_data), GFP_KERNEL);
if (!pcm_data)
return -ENOMEM;
pcm_data->bus_width = BUS_WIDTH;
/* Allocate memory for map_info & mtd_info data structures */
map = devm_kzalloc(&pdev->dev, sizeof(*map), GFP_KERNEL);
if (!map)
return -ENOMEM;
mtd = devm_kzalloc(&pdev->dev, sizeof(*mtd), GFP_KERNEL);
if (!mtd)
return -ENOMEM;
/* lpddr2_nvm address range */
add_range = platform_get_resource(pdev, IORESOURCE_MEM, 0);
/* Populate map_info data structure */
*map = (struct map_info) {
.virt = devm_ioremap_resource(&pdev->dev, add_range),
.name = pdev->dev.init_name,
.phys = add_range->start,
.size = resource_size(add_range),
.bankwidth = pcm_data->bus_width / 2,
.pfow_base = OW_BASE_ADDRESS,
.fldrv_priv = pcm_data,
};
if (IS_ERR(map->virt))
return PTR_ERR(map->virt);
simple_map_init(map); /* fill with default methods */
control_regs = platform_get_resource(pdev, IORESOURCE_MEM, 1);
pcm_data->ctl_regs = devm_ioremap_resource(&pdev->dev, control_regs);
if (IS_ERR(pcm_data->ctl_regs))
return PTR_ERR(pcm_data->ctl_regs);
/* Populate mtd_info data structure */
*mtd = (struct mtd_info) {
.name = pdev->dev.init_name,
.type = MTD_RAM,
.priv = map,
.size = resource_size(add_range),
.erasesize = ERASE_BLOCKSIZE * pcm_data->bus_width,
.writesize = 1,
.writebufsize = WRITE_BUFFSIZE * pcm_data->bus_width,
.flags = (MTD_CAP_NVRAM | MTD_POWERUP_LOCK),
._read = lpddr2_nvm_read,
._write = lpddr2_nvm_write,
._erase = lpddr2_nvm_erase,
._unlock = lpddr2_nvm_unlock,
._lock = lpddr2_nvm_lock,
};
/* Verify the presence of the device looking for PFOW string */
if (!lpddr2_nvm_pfow_present(map)) {
pr_err("device not recognized\n");
return -EINVAL;
}
/* Parse partitions and register the MTD device */
return mtd_device_parse_register(mtd, NULL, NULL, NULL, 0);
}
/*
* lpddr2_nvm driver remove method
*/
static int lpddr2_nvm_remove(struct platform_device *pdev)
{
return mtd_device_unregister(dev_get_drvdata(&pdev->dev));
}
/* Initialize platform_driver data structure for lpddr2_nvm */
static struct platform_driver lpddr2_nvm_drv = {
.driver = {
.name = "lpddr2_nvm",
},
.probe = lpddr2_nvm_probe,
.remove = lpddr2_nvm_remove,
};
module_platform_driver(lpddr2_nvm_drv);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Vincenzo Aliberti <vincenzo.aliberti@gmail.com>");
MODULE_DESCRIPTION("MTD driver for LPDDR2-NVM PCM memories");
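/*
 * Minimal usage sketch (not part of this driver): a board file could
 * register the "lpddr2_nvm" platform device with the two memory resources
 * that probe() expects -- index 0 for the LPDDR2-NVM address range and
 * index 1 for the memory controller registers. Names, addresses and sizes
 * below are placeholders; <linux/platform_device.h>, <linux/ioport.h> and
 * <linux/sizes.h> provide the helpers used here.
 */
static struct resource lpddr2_nvm_example_resources[] __maybe_unused = {
	DEFINE_RES_MEM(0x80000000, SZ_64M),	/* flash address range */
	DEFINE_RES_MEM(0x4000a000, SZ_4K),	/* controller registers */
};

static struct platform_device lpddr2_nvm_example_device __maybe_unused = {
	.name		= "lpddr2_nvm",
	.id		= -1,
	.resource	= lpddr2_nvm_example_resources,
	.num_resources	= ARRAY_SIZE(lpddr2_nvm_example_resources),
};
/* ...and platform_device_register(&lpddr2_nvm_example_device) from board init. */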
......@@ -108,7 +108,7 @@ config MTD_SUN_UFLASH
config MTD_SC520CDP
tristate "CFI Flash device mapped on AMD SC520 CDP"
depends on X86 && MTD_CFI
depends on (MELAN || COMPILE_TEST) && MTD_CFI
help
The SC520 CDP board has two banks of CFI-compliant chips and one
Dual-in-line JEDEC chip. This 'mapping' driver supports that
......@@ -116,7 +116,7 @@ config MTD_SC520CDP
config MTD_NETSC520
tristate "CFI Flash device mapped on AMD NetSc520"
depends on X86 && MTD_CFI
depends on (MELAN || COMPILE_TEST) && MTD_CFI
help
This enables access routines for the flash chips on the AMD NetSc520
demonstration board. If you have one of these boards and would like
......
......@@ -183,7 +183,7 @@ static const struct sc520_par_table par_table[NUM_FLASH_BANKS] =
static void sc520cdp_setup_par(void)
{
volatile unsigned long __iomem *mmcr;
unsigned long __iomem *mmcr;
unsigned long mmcr_val;
int i, j;
......@@ -203,11 +203,11 @@ static void sc520cdp_setup_par(void)
*/
for(i = 0; i < NUM_FLASH_BANKS; i++) { /* for each par_table entry */
for(j = 0; j < NUM_SC520_PAR; j++) { /* for each PAR register */
mmcr_val = mmcr[SC520_PAR(j)];
mmcr_val = readl(&mmcr[SC520_PAR(j)]);
/* if target device field matches, reprogram the PAR */
if((mmcr_val & SC520_PAR_TRGDEV) == par_table[i].trgdev)
{
mmcr[SC520_PAR(j)] = par_table[i].new_par;
writel(par_table[i].new_par, &mmcr[SC520_PAR(j)]);
break;
}
}
......
......@@ -33,28 +33,6 @@ struct map_info soleng_flash_map = {
static const char * const probes[] = { "RedBoot", "cmdlinepart", NULL };
#ifdef CONFIG_MTD_SUPERH_RESERVE
static struct mtd_partition superh_se_partitions[] = {
/* Reserved for boot code, read-only */
{
.name = "flash_boot",
.offset = 0x00000000,
.size = CONFIG_MTD_SUPERH_RESERVE,
.mask_flags = MTD_WRITEABLE,
},
/* All else is writable (e.g. JFFS) */
{
.name = "Flash FS",
.offset = MTDPART_OFS_NXTBLK,
.size = MTDPART_SIZ_FULL,
}
};
#define NUM_PARTITIONS ARRAY_SIZE(superh_se_partitions)
#else
#define superh_se_partitions NULL
#define NUM_PARTITIONS 0
#endif /* CONFIG_MTD_SUPERH_RESERVE */
static int __init init_soleng_maps(void)
{
/* First probe at offset 0 */
......@@ -92,8 +70,7 @@ static int __init init_soleng_maps(void)
mtd_device_register(eprom_mtd, NULL, 0);
}
mtd_device_parse_register(flash_mtd, probes, NULL,
superh_se_partitions, NUM_PARTITIONS);
mtd_device_parse_register(flash_mtd, probes, NULL, NULL, 0);
return 0;
}
......
......@@ -87,6 +87,9 @@ static int do_blktrans_request(struct mtd_blktrans_ops *tr,
if (req->cmd_type != REQ_TYPE_FS)
return -EIO;
if (req->cmd_flags & REQ_FLUSH)
return tr->flush(dev);
if (blk_rq_pos(req) + blk_rq_cur_sectors(req) >
get_capacity(req->rq_disk))
return -EIO;
......@@ -407,6 +410,9 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
if (!new->rq)
goto error3;
if (tr->flush)
blk_queue_flush(new->rq, REQ_FLUSH);
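/* Advertising flush support here makes the block layer issue REQ_FLUSH
 * requests, which do_blktrans_request() above translates into a call to
 * tr->flush(dev). */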
new->rq->queuedata = new;
blk_queue_logical_block_size(new->rq, tr->blksize);
......
......@@ -568,13 +568,18 @@ static int mtdchar_write_ioctl(struct mtd_info *mtd,
{
struct mtd_write_req req;
struct mtd_oob_ops ops;
void __user *usr_data, *usr_oob;
const void __user *usr_data, *usr_oob;
int ret;
if (copy_from_user(&req, argp, sizeof(req)) ||
!access_ok(VERIFY_READ, req.usr_data, req.len) ||
!access_ok(VERIFY_READ, req.usr_oob, req.ooblen))
if (copy_from_user(&req, argp, sizeof(req)))
return -EFAULT;
usr_data = (const void __user *)(uintptr_t)req.usr_data;
usr_oob = (const void __user *)(uintptr_t)req.usr_oob;
if (!access_ok(VERIFY_READ, usr_data, req.len) ||
!access_ok(VERIFY_READ, usr_oob, req.ooblen))
return -EFAULT;
if (!mtd->_write_oob)
return -EOPNOTSUPP;
......@@ -583,10 +588,7 @@ static int mtdchar_write_ioctl(struct mtd_info *mtd,
ops.ooblen = (size_t)req.ooblen;
ops.ooboffs = 0;
usr_data = (void __user *)(uintptr_t)req.usr_data;
usr_oob = (void __user *)(uintptr_t)req.usr_oob;
if (req.usr_data) {
if (usr_data) {
ops.datbuf = memdup_user(usr_data, ops.len);
if (IS_ERR(ops.datbuf))
return PTR_ERR(ops.datbuf);
......@@ -594,7 +596,7 @@ static int mtdchar_write_ioctl(struct mtd_info *mtd,
ops.datbuf = NULL;
}
if (req.usr_oob) {
if (usr_oob) {
ops.oobbuf = memdup_user(usr_oob, ops.ooblen);
if (IS_ERR(ops.oobbuf)) {
kfree(ops.datbuf);
......
......@@ -679,9 +679,6 @@ static int bf5xx_nand_remove(struct platform_device *pdev)
peripheral_free_list(bfin_nfc_pin_req);
bf5xx_nand_dma_remove(info);
/* free the common resources */
kfree(info);
return 0;
}
......@@ -742,10 +739,10 @@ static int bf5xx_nand_probe(struct platform_device *pdev)
return -EFAULT;
}
info = kzalloc(sizeof(*info), GFP_KERNEL);
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (info == NULL) {
err = -ENOMEM;
goto out_err_kzalloc;
goto out_err;
}
platform_set_drvdata(pdev, info);
......@@ -790,7 +787,7 @@ static int bf5xx_nand_probe(struct platform_device *pdev)
/* initialise the hardware */
err = bf5xx_nand_hw_init(info);
if (err)
goto out_err_hw_init;
goto out_err;
/* setup hardware ECC data struct */
if (hardware_ecc) {
......@@ -827,9 +824,7 @@ static int bf5xx_nand_probe(struct platform_device *pdev)
out_err_nand_scan:
bf5xx_nand_dma_remove(info);
out_err_hw_init:
kfree(info);
out_err_kzalloc:
out_err:
peripheral_free_list(bfin_nfc_pin_req);
return err;
......
......@@ -1233,7 +1233,7 @@ static int denali_waitfunc(struct mtd_info *mtd, struct nand_chip *chip)
return status;
}
static void denali_erase(struct mtd_info *mtd, int page)
static int denali_erase(struct mtd_info *mtd, int page)
{
struct denali_nand_info *denali = mtd_to_denali(mtd);
......@@ -1250,8 +1250,7 @@ static void denali_erase(struct mtd_info *mtd, int page)
irq_status = wait_for_irq(denali, INTR_STATUS__ERASE_COMP |
INTR_STATUS__ERASE_FAIL);
denali->status = (irq_status & INTR_STATUS__ERASE_FAIL) ?
NAND_STATUS_FAIL : PASS;
return (irq_status & INTR_STATUS__ERASE_FAIL) ? NAND_STATUS_FAIL : PASS;
}
static void denali_cmdfunc(struct mtd_info *mtd, unsigned int cmd, int col,
......@@ -1584,7 +1583,7 @@ int denali_init(struct denali_nand_info *denali)
denali->nand.ecc.write_page_raw = denali_write_page_raw;
denali->nand.ecc.read_oob = denali_read_oob;
denali->nand.ecc.write_oob = denali_write_oob;
denali->nand.erase_cmd = denali_erase;
denali->nand.erase = denali_erase;
if (nand_scan_tail(&denali->mtd)) {
ret = -ENXIO;
......
......@@ -872,7 +872,7 @@ static int docg4_read_oob(struct mtd_info *mtd, struct nand_chip *nand,
return 0;
}
static void docg4_erase_block(struct mtd_info *mtd, int page)
static int docg4_erase_block(struct mtd_info *mtd, int page)
{
struct nand_chip *nand = mtd->priv;
struct docg4_priv *doc = nand->priv;
......@@ -916,6 +916,8 @@ static void docg4_erase_block(struct mtd_info *mtd, int page)
write_nop(docptr);
poll_status(doc);
write_nop(docptr);
return nand->waitfunc(mtd, nand);
}
static int write_page(struct mtd_info *mtd, struct nand_chip *nand,
......@@ -1236,7 +1238,7 @@ static void __init init_mtd_structs(struct mtd_info *mtd)
nand->block_markbad = docg4_block_markbad;
nand->read_buf = docg4_read_buf;
nand->write_buf = docg4_write_buf16;
nand->erase_cmd = docg4_erase_block;
nand->erase = docg4_erase_block;
nand->ecc.read_page = docg4_read_page;
nand->ecc.write_page = docg4_write_page;
nand->ecc.read_page_raw = docg4_read_page_raw;
......
......@@ -723,6 +723,19 @@ static int fsl_elbc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
return 0;
}
/* ECC will be calculated automatically, and errors will be detected in
* waitfunc.
*/
static int fsl_elbc_write_subpage(struct mtd_info *mtd, struct nand_chip *chip,
uint32_t offset, uint32_t data_len,
const uint8_t *buf, int oob_required)
{
fsl_elbc_write_buf(mtd, buf, mtd->writesize);
fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
}
static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)
{
struct fsl_lbc_ctrl *ctrl = priv->ctrl;
......@@ -761,6 +774,7 @@ static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)
chip->ecc.read_page = fsl_elbc_read_page;
chip->ecc.write_page = fsl_elbc_write_page;
chip->ecc.write_subpage = fsl_elbc_write_subpage;
/* If CS Base Register selects full hardware ECC then use it */
if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) ==
......
......@@ -56,7 +56,7 @@ struct fsl_ifc_nand_ctrl {
struct nand_hw_control controller;
struct fsl_ifc_mtd *chips[FSL_IFC_BANK_COUNT];
u8 __iomem *addr; /* Address of assigned IFC buffer */
void __iomem *addr; /* Address of assigned IFC buffer */
unsigned int page; /* Last page written to / read from */
unsigned int read_bytes;/* Number of bytes read during command */
unsigned int column; /* Saved column from SEQIN */
......@@ -591,6 +591,9 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
* The chip always seems to report that it is
* write-protected, even when it is not.
*/
if (chip->options & NAND_BUSWIDTH_16)
setbits16(ifc_nand_ctrl->addr, NAND_STATUS_WP);
else
setbits8(ifc_nand_ctrl->addr, NAND_STATUS_WP);
return;
......@@ -636,7 +639,7 @@ static void fsl_ifc_write_buf(struct mtd_info *mtd, const u8 *buf, int len)
len = bufsize - ifc_nand_ctrl->index;
}
memcpy_toio(&ifc_nand_ctrl->addr[ifc_nand_ctrl->index], buf, len);
memcpy_toio(ifc_nand_ctrl->addr + ifc_nand_ctrl->index, buf, len);
ifc_nand_ctrl->index += len;
}
......@@ -648,13 +651,16 @@ static uint8_t fsl_ifc_read_byte(struct mtd_info *mtd)
{
struct nand_chip *chip = mtd->priv;
struct fsl_ifc_mtd *priv = chip->priv;
unsigned int offset;
/*
* If there are still bytes in the IFC buffer, then use the
* next byte.
*/
if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes)
return in_8(&ifc_nand_ctrl->addr[ifc_nand_ctrl->index++]);
if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes) {
offset = ifc_nand_ctrl->index++;
return in_8(ifc_nand_ctrl->addr + offset);
}
dev_err(priv->dev, "%s: beyond end of buffer\n", __func__);
return ERR_BYTE;
......@@ -675,8 +681,7 @@ static uint8_t fsl_ifc_read_byte16(struct mtd_info *mtd)
* next byte.
*/
if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes) {
data = in_be16((uint16_t __iomem *)&ifc_nand_ctrl->
addr[ifc_nand_ctrl->index]);
data = in_be16(ifc_nand_ctrl->addr + ifc_nand_ctrl->index);
ifc_nand_ctrl->index += 2;
return (uint8_t) data;
}
......@@ -701,7 +706,7 @@ static void fsl_ifc_read_buf(struct mtd_info *mtd, u8 *buf, int len)
avail = min((unsigned int)len,
ifc_nand_ctrl->read_bytes - ifc_nand_ctrl->index);
memcpy_fromio(buf, &ifc_nand_ctrl->addr[ifc_nand_ctrl->index], avail);
memcpy_fromio(buf, ifc_nand_ctrl->addr + ifc_nand_ctrl->index, avail);
ifc_nand_ctrl->index += avail;
if (len > avail)
......
......@@ -54,7 +54,7 @@
#define MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0 11
#define MX6Q_BM_BCH_FLASH0LAYOUT0_ECC0 (0x1f << MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0)
#define BF_BCH_FLASH0LAYOUT0_ECC0(v, x) \
(GPMI_IS_MX6Q(x) \
(GPMI_IS_MX6(x) \
? (((v) << MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0) \
& MX6Q_BM_BCH_FLASH0LAYOUT0_ECC0) \
: (((v) << BP_BCH_FLASH0LAYOUT0_ECC0) \
......@@ -65,7 +65,7 @@
#define MX6Q_BM_BCH_FLASH0LAYOUT0_GF_13_14 \
(0x1 << MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14)
#define BF_BCH_FLASH0LAYOUT0_GF(v, x) \
((GPMI_IS_MX6Q(x) && ((v) == 14)) \
((GPMI_IS_MX6(x) && ((v) == 14)) \
? (((1) << MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14) \
& MX6Q_BM_BCH_FLASH0LAYOUT0_GF_13_14) \
: 0 \
......@@ -77,7 +77,7 @@
#define MX6Q_BM_BCH_FLASH0LAYOUT0_DATA0_SIZE \
(0x3ff << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE)
#define BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(v, x) \
(GPMI_IS_MX6Q(x) \
(GPMI_IS_MX6(x) \
? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT0_DATA0_SIZE) \
: ((v) & BM_BCH_FLASH0LAYOUT0_DATA0_SIZE) \
)
......@@ -96,7 +96,7 @@
#define MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN 11
#define MX6Q_BM_BCH_FLASH0LAYOUT1_ECCN (0x1f << MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN)
#define BF_BCH_FLASH0LAYOUT1_ECCN(v, x) \
(GPMI_IS_MX6Q(x) \
(GPMI_IS_MX6(x) \
? (((v) << MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN) \
& MX6Q_BM_BCH_FLASH0LAYOUT1_ECCN) \
: (((v) << BP_BCH_FLASH0LAYOUT1_ECCN) \
......@@ -107,7 +107,7 @@
#define MX6Q_BM_BCH_FLASH0LAYOUT1_GF_13_14 \
(0x1 << MX6Q_BP_BCH_FLASH0LAYOUT1_GF_13_14)
#define BF_BCH_FLASH0LAYOUT1_GF(v, x) \
((GPMI_IS_MX6Q(x) && ((v) == 14)) \
((GPMI_IS_MX6(x) && ((v) == 14)) \
? (((1) << MX6Q_BP_BCH_FLASH0LAYOUT1_GF_13_14) \
& MX6Q_BM_BCH_FLASH0LAYOUT1_GF_13_14) \
: 0 \
......@@ -119,7 +119,7 @@
#define MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE \
(0x3ff << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE)
#define BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(v, x) \
(GPMI_IS_MX6Q(x) \
(GPMI_IS_MX6(x) \
? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \
: ((v) & BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \
)
......
......@@ -861,7 +861,7 @@ static void gpmi_compute_edo_timing(struct gpmi_nand_data *this,
struct resources *r = &this->resources;
unsigned long rate = clk_get_rate(r->clock[0]);
int mode = this->timing_mode;
int dll_threshold = 16; /* in ns */
int dll_threshold = this->devdata->max_chain_delay;
unsigned long delay;
unsigned long clk_period;
int t_rea;
......@@ -886,9 +886,6 @@ static void gpmi_compute_edo_timing(struct gpmi_nand_data *this,
/* [3] for GPMI_HW_GPMI_CTRL1 */
hw->wrn_dly_sel = BV_GPMI_CTRL1_WRN_DLY_SEL_NO_DELAY;
if (GPMI_IS_MX6Q(this))
dll_threshold = 12;
/*
* Scale the numerator and denominator in {3} by a factor of 10.
* This gives a more accurate result.
......@@ -974,7 +971,7 @@ int gpmi_extra_init(struct gpmi_nand_data *this)
struct nand_chip *chip = &this->nand;
/* Enable the asynchronous EDO feature. */
if (GPMI_IS_MX6Q(this) && chip->onfi_version) {
if (GPMI_IS_MX6(this) && chip->onfi_version) {
int mode = onfi_get_async_timing_mode(chip);
/* We only support the timing mode 4 and mode 5. */
......@@ -1096,12 +1093,12 @@ int gpmi_is_ready(struct gpmi_nand_data *this, unsigned chip)
if (GPMI_IS_MX23(this)) {
mask = MX23_BM_GPMI_DEBUG_READY0 << chip;
reg = readl(r->gpmi_regs + HW_GPMI_DEBUG);
} else if (GPMI_IS_MX28(this) || GPMI_IS_MX6Q(this)) {
} else if (GPMI_IS_MX28(this) || GPMI_IS_MX6(this)) {
/*
* In the imx6, all the ready/busy pins are bound
* together. So we only need to check chip 0.
*/
if (GPMI_IS_MX6Q(this))
if (GPMI_IS_MX6(this))
chip = 0;
/* MX28 shares the same R/B register as MX6Q. */
......
......@@ -53,6 +53,30 @@ static struct nand_ecclayout gpmi_hw_ecclayout = {
.oobfree = { {.offset = 0, .length = 0} }
};
static const struct gpmi_devdata gpmi_devdata_imx23 = {
.type = IS_MX23,
.bch_max_ecc_strength = 20,
.max_chain_delay = 16,
};
static const struct gpmi_devdata gpmi_devdata_imx28 = {
.type = IS_MX28,
.bch_max_ecc_strength = 20,
.max_chain_delay = 16,
};
static const struct gpmi_devdata gpmi_devdata_imx6q = {
.type = IS_MX6Q,
.bch_max_ecc_strength = 40,
.max_chain_delay = 12,
};
static const struct gpmi_devdata gpmi_devdata_imx6sx = {
.type = IS_MX6SX,
.bch_max_ecc_strength = 62,
.max_chain_delay = 12,
};
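/* These per-SoC tables replace the old MXS_ECC_STRENGTH_MAX /
 * MX6_ECC_STRENGTH_MAX macros and the hard-coded DLL threshold:
 * gpmi_check_ecc() below and gpmi_compute_edo_timing() in gpmi-lib.c now
 * take their limits from this->devdata instead of GPMI_IS_MX6Q() checks. */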
static irqreturn_t bch_irq(int irq, void *cookie)
{
struct gpmi_nand_data *this = cookie;
......@@ -102,14 +126,8 @@ static inline bool gpmi_check_ecc(struct gpmi_nand_data *this)
/* The mx23/mx28 only support the GF13. */
if (geo->gf_len == 14)
return false;
if (geo->ecc_strength > MXS_ECC_STRENGTH_MAX)
return false;
} else if (GPMI_IS_MX6Q(this)) {
if (geo->ecc_strength > MX6_ECC_STRENGTH_MAX)
return false;
}
return true;
return geo->ecc_strength <= this->devdata->bch_max_ecc_strength;
}
/*
......@@ -270,8 +288,7 @@ static int legacy_set_geometry(struct gpmi_nand_data *this)
"We can not support this nand chip."
" Its required ecc strength(%d) is beyond our"
" capability(%d).\n", geo->ecc_strength,
(GPMI_IS_MX6Q(this) ? MX6_ECC_STRENGTH_MAX
: MXS_ECC_STRENGTH_MAX));
this->devdata->bch_max_ecc_strength);
return -EINVAL;
}
......@@ -572,7 +589,7 @@ static int gpmi_get_clks(struct gpmi_nand_data *this)
}
/* Get extra clocks */
if (GPMI_IS_MX6Q(this))
if (GPMI_IS_MX6(this))
extra_clks = extra_clks_for_mx6q;
if (!extra_clks)
return 0;
......@@ -590,9 +607,9 @@ static int gpmi_get_clks(struct gpmi_nand_data *this)
r->clock[i] = clk;
}
if (GPMI_IS_MX6Q(this))
if (GPMI_IS_MX6(this))
/*
* Set the default value for the gpmi clock in mx6q:
* Set the default value for the gpmi clock.
*
* If you want to use an ONFI NAND in synchronous mode, you should
* change the clock as needed.
......@@ -1655,7 +1672,7 @@ static int gpmi_init_last(struct gpmi_nand_data *this)
* (1) the chip is imx6, and
* (2) the size of the ECC parity is byte aligned.
*/
if (GPMI_IS_MX6Q(this) &&
if (GPMI_IS_MX6(this) &&
((bch_geo->gf_len * bch_geo->ecc_strength) % 8) == 0) {
ecc->read_subpage = gpmi_ecc_read_subpage;
chip->options |= NAND_SUBPAGE_READ;
......@@ -1711,7 +1728,7 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
if (ret)
goto err_out;
ret = nand_scan_ident(mtd, GPMI_IS_MX6Q(this) ? 2 : 1, NULL);
ret = nand_scan_ident(mtd, GPMI_IS_MX6(this) ? 2 : 1, NULL);
if (ret)
goto err_out;
......@@ -1740,23 +1757,19 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
return ret;
}
static const struct platform_device_id gpmi_ids[] = {
{ .name = "imx23-gpmi-nand", .driver_data = IS_MX23, },
{ .name = "imx28-gpmi-nand", .driver_data = IS_MX28, },
{ .name = "imx6q-gpmi-nand", .driver_data = IS_MX6Q, },
{}
};
static const struct of_device_id gpmi_nand_id_table[] = {
{
.compatible = "fsl,imx23-gpmi-nand",
.data = (void *)&gpmi_ids[IS_MX23],
.data = (void *)&gpmi_devdata_imx23,
}, {
.compatible = "fsl,imx28-gpmi-nand",
.data = (void *)&gpmi_ids[IS_MX28],
.data = (void *)&gpmi_devdata_imx28,
}, {
.compatible = "fsl,imx6q-gpmi-nand",
.data = (void *)&gpmi_ids[IS_MX6Q],
.data = (void *)&gpmi_devdata_imx6q,
}, {
.compatible = "fsl,imx6sx-gpmi-nand",
.data = (void *)&gpmi_devdata_imx6sx,
}, {}
};
MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
......@@ -1767,18 +1780,18 @@ static int gpmi_nand_probe(struct platform_device *pdev)
const struct of_device_id *of_id;
int ret;
this = devm_kzalloc(&pdev->dev, sizeof(*this), GFP_KERNEL);
if (!this)
return -ENOMEM;
of_id = of_match_device(gpmi_nand_id_table, &pdev->dev);
if (of_id) {
pdev->id_entry = of_id->data;
this->devdata = of_id->data;
} else {
dev_err(&pdev->dev, "Failed to find the right device id.\n");
return -ENODEV;
}
this = devm_kzalloc(&pdev->dev, sizeof(*this), GFP_KERNEL);
if (!this)
return -ENOMEM;
platform_set_drvdata(pdev, this);
this->pdev = pdev;
this->dev = &pdev->dev;
......@@ -1823,7 +1836,6 @@ static struct platform_driver gpmi_nand_driver = {
},
.probe = gpmi_nand_probe,
.remove = gpmi_nand_remove,
.id_table = gpmi_ids,
};
module_platform_driver(gpmi_nand_driver);
......
......@@ -119,11 +119,25 @@ struct nand_timing {
int8_t tRHOH_in_ns;
};
enum gpmi_type {
IS_MX23,
IS_MX28,
IS_MX6Q,
IS_MX6SX
};
struct gpmi_devdata {
enum gpmi_type type;
int bch_max_ecc_strength;
int max_chain_delay; /* See the async EDO mode */
};
struct gpmi_nand_data {
/* flags */
#define GPMI_ASYNC_EDO_ENABLED (1 << 0)
#define GPMI_TIMING_INIT_OK (1 << 1)
int flags;
const struct gpmi_devdata *devdata;
/* System Interface */
struct device *dev;
......@@ -281,15 +295,11 @@ extern int gpmi_read_page(struct gpmi_nand_data *,
#define STATUS_ERASED 0xff
#define STATUS_UNCORRECTABLE 0xfe
/* BCH's bit correction capability. */
#define MXS_ECC_STRENGTH_MAX 20 /* mx23 and mx28 */
#define MX6_ECC_STRENGTH_MAX 40
/* Use the platform_id to distinguish different Archs. */
#define IS_MX23 0x0
#define IS_MX28 0x1
#define IS_MX6Q 0x2
#define GPMI_IS_MX23(x) ((x)->pdev->id_entry->driver_data == IS_MX23)
#define GPMI_IS_MX28(x) ((x)->pdev->id_entry->driver_data == IS_MX28)
#define GPMI_IS_MX6Q(x) ((x)->pdev->id_entry->driver_data == IS_MX6Q)
/* Use the devdata to distinguish different Archs. */
#define GPMI_IS_MX23(x) ((x)->devdata->type == IS_MX23)
#define GPMI_IS_MX28(x) ((x)->devdata->type == IS_MX28)
#define GPMI_IS_MX6Q(x) ((x)->devdata->type == IS_MX6Q)
#define GPMI_IS_MX6SX(x) ((x)->devdata->type == IS_MX6SX)
#define GPMI_IS_MX6(x) (GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x))
#endif
......@@ -37,6 +37,7 @@
#include <linux/err.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/types.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
......@@ -1204,8 +1205,7 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
* ecc.pos. Let's make sure that there are no gaps in ECC positions.
*/
for (i = 0; i < eccfrag_len - 1; i++) {
if (eccpos[i + start_step * chip->ecc.bytes] + 1 !=
eccpos[i + start_step * chip->ecc.bytes + 1]) {
if (eccpos[i + index] + 1 != eccpos[i + index + 1]) {
gaps = 1;
break;
}
......@@ -1501,6 +1501,7 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from,
mtd->oobavail : mtd->oobsize;
uint8_t *bufpoi, *oob, *buf;
int use_bufpoi;
unsigned int max_bitflips = 0;
int retry_mode = 0;
bool ecc_fail = false;
......@@ -1523,9 +1524,20 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from,
bytes = min(mtd->writesize - col, readlen);
aligned = (bytes == mtd->writesize);
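/*
 * Choose whether to bounce the data through chip->buffers->databuf:
 * partial-page reads always need it, and drivers flagged with
 * NAND_USE_BOUNCE_BUFFER need it whenever the caller's buffer fails
 * virt_addr_valid() (e.g. a vmalloc'ed buffer), since such memory
 * cannot be DMA-mapped directly.
 */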
if (!aligned)
use_bufpoi = 1;
else if (chip->options & NAND_USE_BOUNCE_BUFFER)
use_bufpoi = !virt_addr_valid(buf);
else
use_bufpoi = 0;
/* Is the current page in the buffer? */
if (realpage != chip->pagebuf || oob) {
bufpoi = aligned ? buf : chip->buffers->databuf;
bufpoi = use_bufpoi ? chip->buffers->databuf : buf;
if (use_bufpoi && aligned)
pr_debug("%s: using read bounce buffer for buf@%p\n",
__func__, buf);
read_retry:
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
......@@ -1547,7 +1559,7 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from,
ret = chip->ecc.read_page(mtd, chip, bufpoi,
oob_required, page);
if (ret < 0) {
if (!aligned)
if (use_bufpoi)
/* Invalidate page cache */
chip->pagebuf = -1;
break;
......@@ -1556,7 +1568,7 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from,
max_bitflips = max_t(unsigned int, max_bitflips, ret);
/* Transfer not aligned data */
if (!aligned) {
if (use_bufpoi) {
if (!NAND_HAS_SUBPAGE_READ(chip) && !oob &&
!(mtd->ecc_stats.failed - ecc_failures) &&
(ops->mode != MTD_OPS_RAW)) {
......@@ -2376,11 +2388,23 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
int bytes = mtd->writesize;
int cached = writelen > bytes && page != blockmask;
uint8_t *wbuf = buf;
int use_bufpoi;
int part_pagewr = (column || writelen < (mtd->writesize - 1));
/* Partial page write? */
if (unlikely(column || writelen < (mtd->writesize - 1))) {
if (part_pagewr)
use_bufpoi = 1;
else if (chip->options & NAND_USE_BOUNCE_BUFFER)
use_bufpoi = !virt_addr_valid(buf);
else
use_bufpoi = 0;
/* Partial page write?, or need to use bounce buffer */
if (use_bufpoi) {
pr_debug("%s: using write bounce buffer for buf@%p\n",
__func__, buf);
cached = 0;
bytes = min_t(int, bytes - column, (int) writelen);
if (part_pagewr)
bytes = min_t(int, bytes - column, writelen);
chip->pagebuf = -1;
memset(chip->buffers->databuf, 0xff, mtd->writesize);
memcpy(&chip->buffers->databuf[column], buf, bytes);
......@@ -2618,18 +2642,20 @@ static int nand_write_oob(struct mtd_info *mtd, loff_t to,
}
/**
* single_erase_cmd - [GENERIC] NAND standard block erase command function
* single_erase - [GENERIC] NAND standard block erase command function
* @mtd: MTD device structure
* @page: the page address of the block which will be erased
*
* Standard erase command for NAND chips.
* Standard erase command for NAND chips. Returns NAND status.
*/
static void single_erase_cmd(struct mtd_info *mtd, int page)
static int single_erase(struct mtd_info *mtd, int page)
{
struct nand_chip *chip = mtd->priv;
/* Send commands to erase a block */
chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page);
chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1);
return chip->waitfunc(mtd, chip);
}
/**
......@@ -2710,9 +2736,7 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr,
(page + pages_per_block))
chip->pagebuf = -1;
chip->erase_cmd(mtd, page & chip->pagemask);
status = chip->waitfunc(mtd, chip);
status = chip->erase(mtd, page & chip->pagemask);
/*
* See if operation failed and additional status checks are
......@@ -3607,7 +3631,7 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd,
chip->onfi_version = 0;
if (!type->name || !type->pagesize) {
/* Check is chip is ONFI compliant */
/* Check if the chip is ONFI compliant */
if (nand_flash_detect_onfi(mtd, chip, &busw))
goto ident_done;
......@@ -3685,7 +3709,7 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd,
}
chip->badblockbits = 8;
chip->erase_cmd = single_erase_cmd;
chip->erase = single_erase;
/* Do not replace user supplied command function! */
if (mtd->writesize > 512 && chip->cmdfunc == nand_command)
......@@ -3770,6 +3794,39 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips,
}
EXPORT_SYMBOL(nand_scan_ident);
/*
* Check if the chip configuration meets the datasheet requirements.
* If our configuration corrects A bits per B bytes and the minimum
* required correction level is X bits per Y bytes, then we must ensure
* both of the following are true:
*
* (1) A / B >= X / Y
* (2) A >= X
*
* Requirement (1) ensures we can correct for the required bitflip density.
* Requirement (2) ensures we can correct even when all bitflips are clumped
* in the same sector.
*/
static bool nand_ecc_strength_good(struct mtd_info *mtd)
{
struct nand_chip *chip = mtd->priv;
struct nand_ecc_ctrl *ecc = &chip->ecc;
int corr, ds_corr;
if (ecc->size == 0 || chip->ecc_step_ds == 0)
/* Not enough information */
return true;
/*
* We get the number of corrected bits per page to compare
* the correction density.
*/
corr = (mtd->writesize * ecc->strength) / ecc->size;
ds_corr = (mtd->writesize * chip->ecc_strength_ds) / chip->ecc_step_ds;
return corr >= ds_corr && ecc->strength >= chip->ecc_strength_ds;
}
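/*
 * Worked example with hypothetical numbers: a datasheet requirement of
 * 4 bits per 512 bytes (X = 4, Y = 512) on a 2048-byte page gives
 * ds_corr = 2048 * 4 / 512 = 16 bits per page. An ECC setup of 8 bits
 * per 512 bytes yields corr = 32 >= 16 with strength 8 >= 4, so it is
 * accepted; a setup of 2 bits per 128 bytes also yields corr = 32, but
 * strength 2 < 4, so requirement (2) rejects it -- four bitflips
 * clumped into one 512-byte region would be uncorrectable.
 */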
/**
* nand_scan_tail - [NAND Interface] Scan for the NAND device
......@@ -3990,6 +4047,9 @@ int nand_scan_tail(struct mtd_info *mtd)
ecc->layout->oobavail += ecc->layout->oobfree[i].length;
mtd->oobavail = ecc->layout->oobavail;
/* ECC sanity check: warn noisily if it's too weak */
WARN_ON(!nand_ecc_strength_good(mtd));
/*
* Set the number of read / write steps for one page depending on ECC
* mode.
......@@ -4023,8 +4083,16 @@ int nand_scan_tail(struct mtd_info *mtd)
chip->pagebuf = -1;
/* Large page NAND with SOFT_ECC should support subpage reads */
if ((ecc->mode == NAND_ECC_SOFT) && (chip->page_shift > 9))
switch (ecc->mode) {
case NAND_ECC_SOFT:
case NAND_ECC_SOFT_BCH:
if (chip->page_shift > 9)
chip->options |= NAND_SUBPAGE_READ;
break;
default:
break;
}
/* Fill in remaining MTD driver data */
mtd->type = nand_is_slc(chip) ? MTD_NANDFLASH : MTD_MLCNANDFLASH;
......
......@@ -528,7 +528,7 @@ static int search_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr
{
struct nand_chip *this = mtd->priv;
int i, chips;
int bits, startblock, block, dir;
int startblock, block, dir;
int scanlen = mtd->writesize + mtd->oobsize;
int bbtblocks;
int blocktopage = this->bbt_erase_shift - this->page_shift;
......@@ -552,9 +552,6 @@ static int search_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr
bbtblocks = mtd->size >> this->bbt_erase_shift;
}
/* Number of bits for each erase block in the bbt */
bits = td->options & NAND_BBT_NRBITS_MSK;
for (i = 0; i < chips; i++) {
/* Reset version information */
td->version[i] = 0;
......@@ -1285,6 +1282,7 @@ static int nand_create_badblock_pattern(struct nand_chip *this)
int nand_default_bbt(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
int ret;
/* Is a flash based bad block table requested? */
if (this->bbt_options & NAND_BBT_USE_FLASH) {
......@@ -1303,8 +1301,11 @@ int nand_default_bbt(struct mtd_info *mtd)
this->bbt_md = NULL;
}
if (!this->badblock_pattern)
nand_create_badblock_pattern(this);
if (!this->badblock_pattern) {
ret = nand_create_badblock_pattern(this);
if (ret)
return ret;
}
return nand_scan_bbt(mtd, this->badblock_pattern);
}
......
......@@ -506,7 +506,7 @@ int __nand_correct_data(unsigned char *buf,
if ((bitsperbyte[b0] + bitsperbyte[b1] + bitsperbyte[b2]) == 1)
return 1; /* error in ECC data; no action needed */
pr_err("%s: uncorrectable ECC error", __func__);
pr_err("%s: uncorrectable ECC error\n", __func__);
return -1;
}
EXPORT_SYMBOL(__nand_correct_data);
......
......@@ -137,6 +137,10 @@
#define BADBLOCK_MARKER_LENGTH 2
#ifdef CONFIG_MTD_NAND_OMAP_BCH
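/*
 * Precomputed ECC syndromes of a fully erased (all-0xFF) sector for each
 * BCH scheme; omap_elm_correct_data() compares the read ECC against these
 * to tell an erased page (possibly with a few bitflips) from a real error.
 */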
static u_char bch16_vector[] = {0xf5, 0x24, 0x1c, 0xd0, 0x61, 0xb3, 0xf1, 0x55,
0x2e, 0x2c, 0x86, 0xa3, 0xed, 0x36, 0x1b, 0x78,
0x48, 0x76, 0xa9, 0x3b, 0x97, 0xd1, 0x7a, 0x93,
0x07, 0x0e};
static u_char bch8_vector[] = {0xf3, 0xdb, 0x14, 0x16, 0x8b, 0xd2, 0xbe, 0xcc,
0xac, 0x6b, 0xff, 0x99, 0x7b};
static u_char bch4_vector[] = {0x00, 0x6b, 0x31, 0xdd, 0x41, 0xbc, 0x10};
......@@ -1114,6 +1118,19 @@ static void __maybe_unused omap_enable_hwecc_bch(struct mtd_info *mtd, int mode)
ecc_size1 = BCH_ECC_SIZE1;
}
break;
case OMAP_ECC_BCH16_CODE_HW:
bch_type = 0x2;
nsectors = chip->ecc.steps;
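/*
 * BCH16 over a 512-byte sector needs 16 * 13 = 208 parity bits,
 * i.e. 26 bytes or 52 nibbles -- hence the ecc_size values below.
 */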
if (mode == NAND_ECC_READ) {
wr_mode = 0x01;
ecc_size0 = 52; /* ECC bits in nibbles per sector */
ecc_size1 = 0; /* non-ECC bits in nibbles per sector */
} else {
wr_mode = 0x01;
ecc_size0 = 0; /* extra bits in nibbles per sector */
ecc_size1 = 52; /* OOB bits in nibbles per sector */
}
break;
default:
return;
}
......@@ -1162,7 +1179,8 @@ static int __maybe_unused omap_calculate_ecc_bch(struct mtd_info *mtd,
struct gpmc_nand_regs *gpmc_regs = &info->reg;
u8 *ecc_code;
unsigned long nsectors, bch_val1, bch_val2, bch_val3, bch_val4;
int i;
u32 val;
int i, j;
nsectors = ((readl(info->reg.gpmc_ecc_config) >> 4) & 0x7) + 1;
for (i = 0; i < nsectors; i++) {
......@@ -1201,6 +1219,41 @@ static int __maybe_unused omap_calculate_ecc_bch(struct mtd_info *mtd,
*ecc_code++ = ((bch_val1 >> 4) & 0xFF);
*ecc_code++ = ((bch_val1 & 0xF) << 4);
break;
case OMAP_ECC_BCH16_CODE_HW:
val = readl(gpmc_regs->gpmc_bch_result6[i]);
ecc_code[0] = ((val >> 8) & 0xFF);
ecc_code[1] = ((val >> 0) & 0xFF);
val = readl(gpmc_regs->gpmc_bch_result5[i]);
ecc_code[2] = ((val >> 24) & 0xFF);
ecc_code[3] = ((val >> 16) & 0xFF);
ecc_code[4] = ((val >> 8) & 0xFF);
ecc_code[5] = ((val >> 0) & 0xFF);
val = readl(gpmc_regs->gpmc_bch_result4[i]);
ecc_code[6] = ((val >> 24) & 0xFF);
ecc_code[7] = ((val >> 16) & 0xFF);
ecc_code[8] = ((val >> 8) & 0xFF);
ecc_code[9] = ((val >> 0) & 0xFF);
val = readl(gpmc_regs->gpmc_bch_result3[i]);
ecc_code[10] = ((val >> 24) & 0xFF);
ecc_code[11] = ((val >> 16) & 0xFF);
ecc_code[12] = ((val >> 8) & 0xFF);
ecc_code[13] = ((val >> 0) & 0xFF);
val = readl(gpmc_regs->gpmc_bch_result2[i]);
ecc_code[14] = ((val >> 24) & 0xFF);
ecc_code[15] = ((val >> 16) & 0xFF);
ecc_code[16] = ((val >> 8) & 0xFF);
ecc_code[17] = ((val >> 0) & 0xFF);
val = readl(gpmc_regs->gpmc_bch_result1[i]);
ecc_code[18] = ((val >> 24) & 0xFF);
ecc_code[19] = ((val >> 16) & 0xFF);
ecc_code[20] = ((val >> 8) & 0xFF);
ecc_code[21] = ((val >> 0) & 0xFF);
val = readl(gpmc_regs->gpmc_bch_result0[i]);
ecc_code[22] = ((val >> 24) & 0xFF);
ecc_code[23] = ((val >> 16) & 0xFF);
ecc_code[24] = ((val >> 8) & 0xFF);
ecc_code[25] = ((val >> 0) & 0xFF);
break;
default:
return -EINVAL;
}
......@@ -1210,8 +1263,8 @@ static int __maybe_unused omap_calculate_ecc_bch(struct mtd_info *mtd,
case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW:
/* Add constant polynomial to remainder, so that
* ECC of blank pages results in 0x0 on reading back */
for (i = 0; i < eccbytes; i++)
ecc_calc[i] ^= bch4_polynomial[i];
for (j = 0; j < eccbytes; j++)
ecc_calc[j] ^= bch4_polynomial[j];
break;
case OMAP_ECC_BCH4_CODE_HW:
/* Set 8th ECC byte as 0x0 for ROM compatibility */
......@@ -1220,13 +1273,15 @@ static int __maybe_unused omap_calculate_ecc_bch(struct mtd_info *mtd,
case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW:
/* Add constant polynomial to remainder, so that
* ECC of blank pages results in 0x0 on reading back */
for (i = 0; i < eccbytes; i++)
ecc_calc[i] ^= bch8_polynomial[i];
for (j = 0; j < eccbytes; j++)
ecc_calc[j] ^= bch8_polynomial[j];
break;
case OMAP_ECC_BCH8_CODE_HW:
/* Set 14th ECC byte as 0x0 for ROM compatibility */
ecc_calc[eccbytes - 1] = 0x0;
break;
case OMAP_ECC_BCH16_CODE_HW:
break;
default:
return -EINVAL;
}
......@@ -1237,6 +1292,7 @@ static int __maybe_unused omap_calculate_ecc_bch(struct mtd_info *mtd,
return 0;
}
#ifdef CONFIG_MTD_NAND_OMAP_BCH
/**
* erased_sector_bitflips - count bit flips
* @data: data sector buffer
......@@ -1276,7 +1332,6 @@ static int erased_sector_bitflips(u_char *data, u_char *oob,
return flip_bits;
}
#ifdef CONFIG_MTD_NAND_OMAP_BCH
/**
* omap_elm_correct_data - corrects page data area in case error reported
* @mtd: MTD device structure
......@@ -1318,6 +1373,10 @@ static int omap_elm_correct_data(struct mtd_info *mtd, u_char *data,
actual_eccbytes = ecc->bytes - 1;
erased_ecc_vec = bch8_vector;
break;
case OMAP_ECC_BCH16_CODE_HW:
actual_eccbytes = ecc->bytes;
erased_ecc_vec = bch16_vector;
break;
default:
pr_err("invalid driver configuration\n");
return -EINVAL;
......@@ -1382,7 +1441,7 @@ static int omap_elm_correct_data(struct mtd_info *mtd, u_char *data,
/* Check if any error reported */
if (!is_error_reported)
return 0;
return stat;
/* Decode BCH error using ELM module */
elm_decode_bch_error_page(info->elm_dev, ecc_vec, err_vec);
......@@ -1401,6 +1460,7 @@ static int omap_elm_correct_data(struct mtd_info *mtd, u_char *data,
BCH4_BIT_PAD;
break;
case OMAP_ECC_BCH8_CODE_HW:
case OMAP_ECC_BCH16_CODE_HW:
pos = err_vec[i].error_loc[j];
break;
default:
......@@ -1912,6 +1972,40 @@ static int omap_nand_probe(struct platform_device *pdev)
goto return_error;
#endif
case OMAP_ECC_BCH16_CODE_HW:
#ifdef CONFIG_MTD_NAND_OMAP_BCH
pr_info("using OMAP_ECC_BCH16_CODE_HW ECC scheme\n");
nand_chip->ecc.mode = NAND_ECC_HW;
nand_chip->ecc.size = 512;
nand_chip->ecc.bytes = 26;
nand_chip->ecc.strength = 16;
nand_chip->ecc.hwctl = omap_enable_hwecc_bch;
nand_chip->ecc.correct = omap_elm_correct_data;
nand_chip->ecc.calculate = omap_calculate_ecc_bch;
nand_chip->ecc.read_page = omap_read_page_bch;
nand_chip->ecc.write_page = omap_write_page_bch;
/* This ECC scheme requires ELM H/W block */
err = is_elm_present(info, pdata->elm_of_node, BCH16_ECC);
if (err < 0) {
pr_err("ELM is required for this ECC scheme\n");
goto return_error;
}
/* define ECC layout */
ecclayout->eccbytes = nand_chip->ecc.bytes *
(mtd->writesize /
nand_chip->ecc.size);
oob_index = BADBLOCK_MARKER_LENGTH;
for (i = 0; i < ecclayout->eccbytes; i++, oob_index++)
ecclayout->eccpos[i] = oob_index;
/* reserved marker already included in ecclayout->eccbytes */
ecclayout->oobfree->offset =
ecclayout->eccpos[ecclayout->eccbytes - 1] + 1;
break;
#else
pr_err("nand: error: CONFIG_MTD_NAND_OMAP_BCH not enabled\n");
err = -EINVAL;
goto return_error;
#endif
default:
pr_err("nand: error: invalid or unsupported ECC scheme\n");
err = -EINVAL;
......
......@@ -214,7 +214,7 @@ static int orion_nand_remove(struct platform_device *pdev)
}
#ifdef CONFIG_OF
static struct of_device_id orion_nand_of_match_table[] = {
static const struct of_device_id orion_nand_of_match_table[] = {
{ .compatible = "marvell,orion-nand", },
{},
};
......
......@@ -127,10 +127,10 @@
/* macros for registers read/write */
#define nand_writel(info, off, val) \
__raw_writel((val), (info)->mmio_base + (off))
writel_relaxed((val), (info)->mmio_base + (off))
#define nand_readl(info, off) \
__raw_readl((info)->mmio_base + (off))
readl_relaxed((info)->mmio_base + (off))
/* error code and state */
enum {
......@@ -337,7 +337,7 @@ static struct nand_ecclayout ecc_layout_4KB_bch8bit = {
/* convert nano-seconds to nand flash controller clock cycles */
#define ns2cycle(ns, clk) (int)((ns) * (clk / 1000000) / 1000)
static struct of_device_id pxa3xx_nand_dt_ids[] = {
static const struct of_device_id pxa3xx_nand_dt_ids[] = {
{
.compatible = "marvell,pxa3xx-nand",
.data = (void *)PXA3XX_NAND_VARIANT_PXA,
......@@ -1354,7 +1354,6 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
ecc->mode = NAND_ECC_HW;
ecc->size = 512;
ecc->strength = 1;
return 1;
} else if (strength == 1 && ecc_stepsize == 512 && page_size == 512) {
info->chunk_size = 512;
......@@ -1363,7 +1362,6 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
ecc->mode = NAND_ECC_HW;
ecc->size = 512;
ecc->strength = 1;
return 1;
/*
* Required ECC: 4-bit correction per 512 bytes
......@@ -1378,7 +1376,6 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
ecc->size = info->chunk_size;
ecc->layout = &ecc_layout_2KB_bch4bit;
ecc->strength = 16;
return 1;
} else if (strength == 4 && ecc_stepsize == 512 && page_size == 4096) {
info->ecc_bch = 1;
......@@ -1389,7 +1386,6 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
ecc->size = info->chunk_size;
ecc->layout = &ecc_layout_4KB_bch4bit;
ecc->strength = 16;
return 1;
/*
* Required ECC: 8-bit correction per 512 bytes
......@@ -1404,8 +1400,15 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
ecc->size = info->chunk_size;
ecc->layout = &ecc_layout_4KB_bch8bit;
ecc->strength = 16;
return 1;
} else {
dev_err(&info->pdev->dev,
"ECC strength %d at page size %d is not supported\n",
strength, page_size);
return -ENODEV;
}
dev_info(&info->pdev->dev, "ECC strength %d, ECC step size %d\n",
ecc->strength, ecc->size);
return 0;
}
......@@ -1516,8 +1519,13 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
}
}
if (pdata->ecc_strength && pdata->ecc_step_size) {
ecc_strength = pdata->ecc_strength;
ecc_step = pdata->ecc_step_size;
} else {
ecc_strength = chip->ecc_strength_ds;
ecc_step = chip->ecc_step_ds;
}
/* Set default ECC strength requirements on non-ONFI devices */
if (ecc_strength < 1 && ecc_step < 1) {
......@@ -1527,12 +1535,8 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
ret = pxa_ecc_init(info, &chip->ecc, ecc_strength,
ecc_step, mtd->writesize);
if (!ret) {
dev_err(&info->pdev->dev,
"ECC strength %d at page size %d is not supported\n",
ecc_strength, mtd->writesize);
return -ENODEV;
}
if (ret)
return ret;
/* calculate addressing information */
if (mtd->writesize >= 2048)
......@@ -1730,6 +1734,14 @@ static int pxa3xx_nand_probe_dt(struct platform_device *pdev)
of_property_read_u32(np, "num-cs", &pdata->num_cs);
pdata->flash_bbt = of_get_nand_on_flash_bbt(np);
pdata->ecc_strength = of_get_nand_ecc_strength(np);
if (pdata->ecc_strength < 0)
pdata->ecc_strength = 0;
pdata->ecc_step_size = of_get_nand_ecc_step_size(np);
if (pdata->ecc_step_size < 0)
pdata->ecc_step_size = 0;
pdev->dev.platform_data = pdata;
return 0;
......
......@@ -245,7 +245,7 @@ static void r852_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
}
/* write DWORD chunks - faster */
while (len) {
while (len >= 4) {
reg = buf[0] | buf[1] << 8 | buf[2] << 16 | buf[3] << 24;
r852_write_reg_dword(dev, R852_DATALINE, reg);
buf += 4;
......@@ -254,8 +254,10 @@ static void r852_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
}
/* write rest */
while (len)
while (len > 0) {
r852_write_reg(dev, R852_DATALINE, *buf++);
len--;
}
}
/*
......
......@@ -537,9 +537,9 @@ static int onenand_write_bufferram(struct mtd_info *mtd, int area,
return 0;
}
static int (*s5pc110_dma_ops)(void *dst, void *src, size_t count, int direction);
static int (*s5pc110_dma_ops)(dma_addr_t dst, dma_addr_t src, size_t count, int direction);
static int s5pc110_dma_poll(void *dst, void *src, size_t count, int direction)
static int s5pc110_dma_poll(dma_addr_t dst, dma_addr_t src, size_t count, int direction)
{
void __iomem *base = onenand->dma_addr;
int status;
......@@ -605,7 +605,7 @@ static irqreturn_t s5pc110_onenand_irq(int irq, void *data)
return IRQ_HANDLED;
}
static int s5pc110_dma_irq(void *dst, void *src, size_t count, int direction)
static int s5pc110_dma_irq(dma_addr_t dst, dma_addr_t src, size_t count, int direction)
{
void __iomem *base = onenand->dma_addr;
int status;
......@@ -686,7 +686,7 @@ static int s5pc110_read_bufferram(struct mtd_info *mtd, int area,
dev_err(dev, "Couldn't map a %d byte buffer for DMA\n", count);
goto normal;
}
err = s5pc110_dma_ops((void *) dma_dst, (void *) dma_src,
err = s5pc110_dma_ops(dma_dst, dma_src,
count, S5PC110_DMA_DIR_READ);
if (page_dma)
......
menuconfig MTD_SPI_NOR
tristate "SPI-NOR device support"
depends on MTD
help
This is the framework for SPI NOR, which can be used by SPI
device drivers and by dedicated SPI-NOR controller drivers.
if MTD_SPI_NOR
config SPI_FSL_QUADSPI
tristate "Freescale Quad SPI controller"
depends on ARCH_MXC
help
This enables support for the Quad SPI controller in master mode.
Only SPI NOR flash is connected to this controller for now.
endif # MTD_SPI_NOR
obj-$(CONFIG_MTD_SPI_NOR) += spi-nor.o
obj-$(CONFIG_SPI_FSL_QUADSPI) += fsl-quadspi.o
/*
* Freescale QuadSPI driver.
*
* Copyright (C) 2013 Freescale Semiconductor, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/errno.h>
#include <linux/platform_device.h>
#include <linux/sched.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/completion.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/spi-nor.h>
/* The registers */
#define QUADSPI_MCR 0x00
#define QUADSPI_MCR_RESERVED_SHIFT 16
#define QUADSPI_MCR_RESERVED_MASK (0xF << QUADSPI_MCR_RESERVED_SHIFT)
#define QUADSPI_MCR_MDIS_SHIFT 14
#define QUADSPI_MCR_MDIS_MASK (1 << QUADSPI_MCR_MDIS_SHIFT)
#define QUADSPI_MCR_CLR_TXF_SHIFT 11
#define QUADSPI_MCR_CLR_TXF_MASK (1 << QUADSPI_MCR_CLR_TXF_SHIFT)
#define QUADSPI_MCR_CLR_RXF_SHIFT 10
#define QUADSPI_MCR_CLR_RXF_MASK (1 << QUADSPI_MCR_CLR_RXF_SHIFT)
#define QUADSPI_MCR_DDR_EN_SHIFT 7
#define QUADSPI_MCR_DDR_EN_MASK (1 << QUADSPI_MCR_DDR_EN_SHIFT)
#define QUADSPI_MCR_END_CFG_SHIFT 2
#define QUADSPI_MCR_END_CFG_MASK (3 << QUADSPI_MCR_END_CFG_SHIFT)
#define QUADSPI_MCR_SWRSTHD_SHIFT 1
#define QUADSPI_MCR_SWRSTHD_MASK (1 << QUADSPI_MCR_SWRSTHD_SHIFT)
#define QUADSPI_MCR_SWRSTSD_SHIFT 0
#define QUADSPI_MCR_SWRSTSD_MASK (1 << QUADSPI_MCR_SWRSTSD_SHIFT)
#define QUADSPI_IPCR 0x08
#define QUADSPI_IPCR_SEQID_SHIFT 24
#define QUADSPI_IPCR_SEQID_MASK (0xF << QUADSPI_IPCR_SEQID_SHIFT)
#define QUADSPI_BUF0CR 0x10
#define QUADSPI_BUF1CR 0x14
#define QUADSPI_BUF2CR 0x18
#define QUADSPI_BUFXCR_INVALID_MSTRID 0xe
#define QUADSPI_BUF3CR 0x1c
#define QUADSPI_BUF3CR_ALLMST_SHIFT 31
#define QUADSPI_BUF3CR_ALLMST (1 << QUADSPI_BUF3CR_ALLMST_SHIFT)
#define QUADSPI_BFGENCR 0x20
#define QUADSPI_BFGENCR_PAR_EN_SHIFT 16
#define QUADSPI_BFGENCR_PAR_EN_MASK (1 << (QUADSPI_BFGENCR_PAR_EN_SHIFT))
#define QUADSPI_BFGENCR_SEQID_SHIFT 12
#define QUADSPI_BFGENCR_SEQID_MASK (0xF << QUADSPI_BFGENCR_SEQID_SHIFT)
#define QUADSPI_BUF0IND 0x30
#define QUADSPI_BUF1IND 0x34
#define QUADSPI_BUF2IND 0x38
#define QUADSPI_SFAR 0x100
#define QUADSPI_SMPR 0x108
#define QUADSPI_SMPR_DDRSMP_SHIFT 16
#define QUADSPI_SMPR_DDRSMP_MASK (7 << QUADSPI_SMPR_DDRSMP_SHIFT)
#define QUADSPI_SMPR_FSDLY_SHIFT 6
#define QUADSPI_SMPR_FSDLY_MASK (1 << QUADSPI_SMPR_FSDLY_SHIFT)
#define QUADSPI_SMPR_FSPHS_SHIFT 5
#define QUADSPI_SMPR_FSPHS_MASK (1 << QUADSPI_SMPR_FSPHS_SHIFT)
#define QUADSPI_SMPR_HSENA_SHIFT 0
#define QUADSPI_SMPR_HSENA_MASK (1 << QUADSPI_SMPR_HSENA_SHIFT)
#define QUADSPI_RBSR 0x10c
#define QUADSPI_RBSR_RDBFL_SHIFT 8
#define QUADSPI_RBSR_RDBFL_MASK (0x3F << QUADSPI_RBSR_RDBFL_SHIFT)
#define QUADSPI_RBCT 0x110
#define QUADSPI_RBCT_WMRK_MASK 0x1F
#define QUADSPI_RBCT_RXBRD_SHIFT 8
#define QUADSPI_RBCT_RXBRD_USEIPS (0x1 << QUADSPI_RBCT_RXBRD_SHIFT)
#define QUADSPI_TBSR 0x150
#define QUADSPI_TBDR 0x154
#define QUADSPI_SR 0x15c
#define QUADSPI_SR_IP_ACC_SHIFT 1
#define QUADSPI_SR_IP_ACC_MASK (0x1 << QUADSPI_SR_IP_ACC_SHIFT)
#define QUADSPI_SR_AHB_ACC_SHIFT 2
#define QUADSPI_SR_AHB_ACC_MASK (0x1 << QUADSPI_SR_AHB_ACC_SHIFT)
#define QUADSPI_FR 0x160
#define QUADSPI_FR_TFF_MASK 0x1
#define QUADSPI_SFA1AD 0x180
#define QUADSPI_SFA2AD 0x184
#define QUADSPI_SFB1AD 0x188
#define QUADSPI_SFB2AD 0x18c
#define QUADSPI_RBDR 0x200
#define QUADSPI_LUTKEY 0x300
#define QUADSPI_LUTKEY_VALUE 0x5AF05AF0
#define QUADSPI_LCKCR 0x304
#define QUADSPI_LCKER_LOCK 0x1
#define QUADSPI_LCKER_UNLOCK 0x2
#define QUADSPI_RSER 0x164
#define QUADSPI_RSER_TFIE (0x1 << 0)
#define QUADSPI_LUT_BASE 0x310
/*
* The definition of the LUT register is shown below:
*
* ---------------------------------------------------
* | INSTR1 | PAD1 | OPRND1 | INSTR0 | PAD0 | OPRND0 |
* ---------------------------------------------------
*/
#define OPRND0_SHIFT 0
#define PAD0_SHIFT 8
#define INSTR0_SHIFT 10
#define OPRND1_SHIFT 16
/* Instruction set for the LUT register. */
#define LUT_STOP 0
#define LUT_CMD 1
#define LUT_ADDR 2
#define LUT_DUMMY 3
#define LUT_MODE 4
#define LUT_MODE2 5
#define LUT_MODE4 6
#define LUT_READ 7
#define LUT_WRITE 8
#define LUT_JMP_ON_CS 9
#define LUT_ADDR_DDR 10
#define LUT_MODE_DDR 11
#define LUT_MODE2_DDR 12
#define LUT_MODE4_DDR 13
#define LUT_READ_DDR 14
#define LUT_WRITE_DDR 15
#define LUT_DATA_LEARN 16
/*
* The PAD definitions for LUT register.
*
* The pad stands for the number of I/O lines (IO[0:3]) used.
* For example, a Quad read needs four I/O lines, so you should
* set LUT_PAD4, which means we use four I/O lines.
*/
#define LUT_PAD1 0
#define LUT_PAD2 1
#define LUT_PAD4 2
/* Operands for the LUT register. */
#define ADDR24BIT 0x18
#define ADDR32BIT 0x20
/* Macros for constructing the LUT register. */
#define LUT0(ins, pad, opr) \
(((opr) << OPRND0_SHIFT) | ((LUT_##pad) << PAD0_SHIFT) | \
((LUT_##ins) << INSTR0_SHIFT))
#define LUT1(ins, pad, opr) (LUT0(ins, pad, opr) << OPRND1_SHIFT)
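/*
 * For example, the single-entry Write Enable sequence programmed below,
 * LUT0(CMD, PAD1, SPINOR_OP_WREN), expands to
 * (0x06 << OPRND0_SHIFT) | (LUT_PAD1 << PAD0_SHIFT) | (LUT_CMD << INSTR0_SHIFT)
 * = 0x0406 (assuming the standard 0x06 WREN opcode): send command 0x06 on
 * one I/O line, then stop.
 */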
/* other macros for LUT register. */
#define QUADSPI_LUT(x) (QUADSPI_LUT_BASE + (x) * 4)
#define QUADSPI_LUT_NUM 64
/* SEQID -- we can have 16 seqids at most. */
#define SEQID_QUAD_READ 0
#define SEQID_WREN 1
#define SEQID_WRDI 2
#define SEQID_RDSR 3
#define SEQID_SE 4
#define SEQID_CHIP_ERASE 5
#define SEQID_PP 6
#define SEQID_RDID 7
#define SEQID_WRSR 8
#define SEQID_RDCR 9
#define SEQID_EN4B 10
#define SEQID_BRWR 11
enum fsl_qspi_devtype {
FSL_QUADSPI_VYBRID,
FSL_QUADSPI_IMX6SX,
};
struct fsl_qspi_devtype_data {
enum fsl_qspi_devtype devtype;
int rxfifo;
int txfifo;
};
static struct fsl_qspi_devtype_data vybrid_data = {
.devtype = FSL_QUADSPI_VYBRID,
.rxfifo = 128,
.txfifo = 64
};
static struct fsl_qspi_devtype_data imx6sx_data = {
.devtype = FSL_QUADSPI_IMX6SX,
.rxfifo = 128,
.txfifo = 512
};
#define FSL_QSPI_MAX_CHIP 4
struct fsl_qspi {
struct mtd_info mtd[FSL_QSPI_MAX_CHIP];
struct spi_nor nor[FSL_QSPI_MAX_CHIP];
void __iomem *iobase;
void __iomem *ahb_base; /* Used when read from AHB bus */
u32 memmap_phy;
struct clk *clk, *clk_en;
struct device *dev;
struct completion c;
struct fsl_qspi_devtype_data *devtype_data;
u32 nor_size;
u32 nor_num;
u32 clk_rate;
unsigned int chip_base_addr; /* We may support two chips. */
};
static inline int is_vybrid_qspi(struct fsl_qspi *q)
{
return q->devtype_data->devtype == FSL_QUADSPI_VYBRID;
}
static inline int is_imx6sx_qspi(struct fsl_qspi *q)
{
return q->devtype_data->devtype == FSL_QUADSPI_IMX6SX;
}
/*
* An IC bug makes us re-arrange the 32-bit data.
* The following chips, such as IMX6SLX, have fixed this bug.
*/
static inline u32 fsl_qspi_endian_xchg(struct fsl_qspi *q, u32 a)
{
return is_vybrid_qspi(q) ? __swab32(a) : a;
}
static inline void fsl_qspi_unlock_lut(struct fsl_qspi *q)
{
writel(QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY);
writel(QUADSPI_LCKER_UNLOCK, q->iobase + QUADSPI_LCKCR);
}
static inline void fsl_qspi_lock_lut(struct fsl_qspi *q)
{
writel(QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY);
writel(QUADSPI_LCKER_LOCK, q->iobase + QUADSPI_LCKCR);
}
static irqreturn_t fsl_qspi_irq_handler(int irq, void *dev_id)
{
struct fsl_qspi *q = dev_id;
u32 reg;
/* clear interrupt */
reg = readl(q->iobase + QUADSPI_FR);
writel(reg, q->iobase + QUADSPI_FR);
if (reg & QUADSPI_FR_TFF_MASK)
complete(&q->c);
dev_dbg(q->dev, "QUADSPI_FR : 0x%.8x:0x%.8x\n", q->chip_base_addr, reg);
return IRQ_HANDLED;
}
static void fsl_qspi_init_lut(struct fsl_qspi *q)
{
void __iomem *base = q->iobase;
int rxfifo = q->devtype_data->rxfifo;
u32 lut_base;
u8 cmd, addrlen, dummy;
int i;
fsl_qspi_unlock_lut(q);
/* Clear all the LUT table */
for (i = 0; i < QUADSPI_LUT_NUM; i++)
writel(0, base + QUADSPI_LUT_BASE + i * 4);
/* Quad Read */
lut_base = SEQID_QUAD_READ * 4;
if (q->nor_size <= SZ_16M) {
cmd = SPINOR_OP_READ_1_1_4;
addrlen = ADDR24BIT;
dummy = 8;
} else {
/* use the 4-byte address */
cmd = SPINOR_OP_READ_1_1_4;
addrlen = ADDR32BIT;
dummy = 8;
}
writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen),
base + QUADSPI_LUT(lut_base));
writel(LUT0(DUMMY, PAD1, dummy) | LUT1(READ, PAD4, rxfifo),
base + QUADSPI_LUT(lut_base + 1));
/* Write enable */
lut_base = SEQID_WREN * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_WREN), base + QUADSPI_LUT(lut_base));
/* Page Program */
lut_base = SEQID_PP * 4;
if (q->nor_size <= SZ_16M) {
cmd = SPINOR_OP_PP;
addrlen = ADDR24BIT;
} else {
/* use the 4-byte address */
cmd = SPINOR_OP_PP;
addrlen = ADDR32BIT;
}
writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen),
base + QUADSPI_LUT(lut_base));
writel(LUT0(WRITE, PAD1, 0), base + QUADSPI_LUT(lut_base + 1));
/* Read Status */
lut_base = SEQID_RDSR * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_RDSR) | LUT1(READ, PAD1, 0x1),
base + QUADSPI_LUT(lut_base));
/* Erase a sector */
lut_base = SEQID_SE * 4;
if (q->nor_size <= SZ_16M) {
cmd = SPINOR_OP_SE;
addrlen = ADDR24BIT;
} else {
/* use the 4-byte address */
cmd = SPINOR_OP_SE;
addrlen = ADDR32BIT;
}
writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen),
base + QUADSPI_LUT(lut_base));
/* Erase the whole chip */
lut_base = SEQID_CHIP_ERASE * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_CHIP_ERASE),
base + QUADSPI_LUT(lut_base));
/* READ ID */
lut_base = SEQID_RDID * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_RDID) | LUT1(READ, PAD1, 0x8),
base + QUADSPI_LUT(lut_base));
/* Write Register */
lut_base = SEQID_WRSR * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_WRSR) | LUT1(WRITE, PAD1, 0x2),
base + QUADSPI_LUT(lut_base));
/* Read Configuration Register */
lut_base = SEQID_RDCR * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_RDCR) | LUT1(READ, PAD1, 0x1),
base + QUADSPI_LUT(lut_base));
/* Write disable */
lut_base = SEQID_WRDI * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_WRDI), base + QUADSPI_LUT(lut_base));
/* Enter 4 Byte Mode (Micron) */
lut_base = SEQID_EN4B * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_EN4B), base + QUADSPI_LUT(lut_base));
/* Enter 4 Byte Mode (Spansion) */
lut_base = SEQID_BRWR * 4;
writel(LUT0(CMD, PAD1, SPINOR_OP_BRWR), base + QUADSPI_LUT(lut_base));
fsl_qspi_lock_lut(q);
}
/* Get the SEQID for the command */
static int fsl_qspi_get_seqid(struct fsl_qspi *q, u8 cmd)
{
switch (cmd) {
case SPINOR_OP_READ_1_1_4:
return SEQID_QUAD_READ;
case SPINOR_OP_WREN:
return SEQID_WREN;
case SPINOR_OP_WRDI:
return SEQID_WRDI;
case SPINOR_OP_RDSR:
return SEQID_RDSR;
case SPINOR_OP_SE:
return SEQID_SE;
case SPINOR_OP_CHIP_ERASE:
return SEQID_CHIP_ERASE;
case SPINOR_OP_PP:
return SEQID_PP;
case SPINOR_OP_RDID:
return SEQID_RDID;
case SPINOR_OP_WRSR:
return SEQID_WRSR;
case SPINOR_OP_RDCR:
return SEQID_RDCR;
case SPINOR_OP_EN4B:
return SEQID_EN4B;
case SPINOR_OP_BRWR:
return SEQID_BRWR;
default:
dev_err(q->dev, "Unsupported cmd 0x%.2x\n", cmd);
break;
}
return -EINVAL;
}
static int
fsl_qspi_runcmd(struct fsl_qspi *q, u8 cmd, unsigned int addr, int len)
{
void __iomem *base = q->iobase;
int seqid;
u32 reg, reg2;
int err;
init_completion(&q->c);
dev_dbg(q->dev, "to 0x%.8x:0x%.8x, len:%d, cmd:%.2x\n",
q->chip_base_addr, addr, len, cmd);
/* save the reg */
reg = readl(base + QUADSPI_MCR);
writel(q->memmap_phy + q->chip_base_addr + addr, base + QUADSPI_SFAR);
writel(QUADSPI_RBCT_WMRK_MASK | QUADSPI_RBCT_RXBRD_USEIPS,
base + QUADSPI_RBCT);
writel(reg | QUADSPI_MCR_CLR_RXF_MASK, base + QUADSPI_MCR);
do {
reg2 = readl(base + QUADSPI_SR);
if (reg2 & (QUADSPI_SR_IP_ACC_MASK | QUADSPI_SR_AHB_ACC_MASK)) {
udelay(1);
dev_dbg(q->dev, "The controller is busy, 0x%x\n", reg2);
continue;
}
break;
} while (1);
/* trigger the LUT now */
seqid = fsl_qspi_get_seqid(q, cmd);
writel((seqid << QUADSPI_IPCR_SEQID_SHIFT) | len, base + QUADSPI_IPCR);
/* Wait for the interrupt. */
err = wait_for_completion_timeout(&q->c, msecs_to_jiffies(1000));
if (!err) {
dev_err(q->dev,
"cmd 0x%.2x timeout, addr@%.8x, FR:0x%.8x, SR:0x%.8x\n",
cmd, addr, readl(base + QUADSPI_FR),
readl(base + QUADSPI_SR));
err = -ETIMEDOUT;
} else {
err = 0;
}
/* restore the MCR */
writel(reg, base + QUADSPI_MCR);
return err;
}
/* Read out the data from the QUADSPI_RBDR buffer registers. */
static void fsl_qspi_read_data(struct fsl_qspi *q, int len, u8 *rxbuf)
{
u32 tmp;
int i = 0;
while (len > 0) {
tmp = readl(q->iobase + QUADSPI_RBDR + i * 4);
tmp = fsl_qspi_endian_xchg(q, tmp);
dev_dbg(q->dev, "chip addr:0x%.8x, rcv:0x%.8x\n",
q->chip_base_addr, tmp);
if (len >= 4) {
*((u32 *)rxbuf) = tmp;
rxbuf += 4;
} else {
memcpy(rxbuf, &tmp, len);
break;
}
len -= 4;
i++;
}
}
/*
* If we have changed the content of the flash by writing or erasing,
* we need to invalidate the AHB buffer. If we do not do so, we may read out
* the wrong data. The spec tells us to reset the AHB domain and the
* Serial Flash domain at the same time.
*/
static inline void fsl_qspi_invalid(struct fsl_qspi *q)
{
u32 reg;
reg = readl(q->iobase + QUADSPI_MCR);
reg |= QUADSPI_MCR_SWRSTHD_MASK | QUADSPI_MCR_SWRSTSD_MASK;
writel(reg, q->iobase + QUADSPI_MCR);
/*
* The minimum delay : 1 AHB + 2 SFCK clocks.
* Delay 1 us is enough.
*/
udelay(1);
reg &= ~(QUADSPI_MCR_SWRSTHD_MASK | QUADSPI_MCR_SWRSTSD_MASK);
writel(reg, q->iobase + QUADSPI_MCR);
}
static int fsl_qspi_nor_write(struct fsl_qspi *q, struct spi_nor *nor,
u8 opcode, unsigned int to, u32 *txbuf,
unsigned count, size_t *retlen)
{
int ret, i, j;
u32 tmp;
dev_dbg(q->dev, "to 0x%.8x:0x%.8x, len : %d\n",
q->chip_base_addr, to, count);
/* clear the TX FIFO. */
tmp = readl(q->iobase + QUADSPI_MCR);
writel(tmp | QUADSPI_MCR_CLR_RXF_MASK, q->iobase + QUADSPI_MCR);
/* fill the TX data to the FIFO */
for (j = 0, i = ((count + 3) / 4); j < i; j++) {
tmp = fsl_qspi_endian_xchg(q, *txbuf);
writel(tmp, q->iobase + QUADSPI_TBDR);
txbuf++;
}
/* Trigger it */
ret = fsl_qspi_runcmd(q, opcode, to, count);
if (ret == 0 && retlen)
*retlen += count;
return ret;
}
static void fsl_qspi_set_map_addr(struct fsl_qspi *q)
{
int nor_size = q->nor_size;
void __iomem *base = q->iobase;
writel(nor_size + q->memmap_phy, base + QUADSPI_SFA1AD);
writel(nor_size * 2 + q->memmap_phy, base + QUADSPI_SFA2AD);
writel(nor_size * 3 + q->memmap_phy, base + QUADSPI_SFB1AD);
writel(nor_size * 4 + q->memmap_phy, base + QUADSPI_SFB2AD);
}
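/*
 * Illustrative numbers: with a 16 MiB flash the four top-of-range
 * registers end up at memmap_phy + 16 MiB, + 32 MiB, + 48 MiB and
 * + 64 MiB, carving the memory-mapped window into one equally sized
 * slot per possible chip (A1, A2, B1, B2).
 */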
/*
* There are two different ways to read out the data from the flash:
* the "IP Command Read" and the "AHB Command Read".
*
* The IC guy suggests we use the "AHB Command Read", which is faster
* than the "IP Command Read". (What's more, there is a bug in the
* "IP Command Read" in the Vybrid.)
*
* After we set up the registers for the "AHB Command Read", we can use
* memcpy() to read the data directly. A "missed" access to the buffer
* causes the controller to clear the buffer and use the sequence pointed
* to by QUADSPI_BFGENCR[SEQID] to initiate a read from the flash.
*/
static void fsl_qspi_init_abh_read(struct fsl_qspi *q)
{
void __iomem *base = q->iobase;
int seqid;
/* AHB configuration for access buffer 0/1/2. */
writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF0CR);
writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF1CR);
writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF2CR);
writel(QUADSPI_BUF3CR_ALLMST, base + QUADSPI_BUF3CR);
/* We only use the buffer3 */
writel(0, base + QUADSPI_BUF0IND);
writel(0, base + QUADSPI_BUF1IND);
writel(0, base + QUADSPI_BUF2IND);
/* Set the default lut sequence for AHB Read. */
seqid = fsl_qspi_get_seqid(q, q->nor[0].read_opcode);
writel(seqid << QUADSPI_BFGENCR_SEQID_SHIFT,
q->iobase + QUADSPI_BFGENCR);
}
/* We use this function to do some basic init for spi_nor_scan(). */
static int fsl_qspi_nor_setup(struct fsl_qspi *q)
{
void __iomem *base = q->iobase;
u32 reg;
int ret;
/* the default frequency, we will change it in the future.*/
ret = clk_set_rate(q->clk, 66000000);
if (ret)
return ret;
/* Init the LUT table. */
fsl_qspi_init_lut(q);
/* Disable the module */
writel(QUADSPI_MCR_MDIS_MASK | QUADSPI_MCR_RESERVED_MASK,
base + QUADSPI_MCR);
reg = readl(base + QUADSPI_SMPR);
writel(reg & ~(QUADSPI_SMPR_FSDLY_MASK
| QUADSPI_SMPR_FSPHS_MASK
| QUADSPI_SMPR_HSENA_MASK
| QUADSPI_SMPR_DDRSMP_MASK), base + QUADSPI_SMPR);
/* Enable the module */
writel(QUADSPI_MCR_RESERVED_MASK | QUADSPI_MCR_END_CFG_MASK,
base + QUADSPI_MCR);
/* enable the interrupt */
writel(QUADSPI_RSER_TFIE, q->iobase + QUADSPI_RSER);
return 0;
}
static int fsl_qspi_nor_setup_last(struct fsl_qspi *q)
{
unsigned long rate = q->clk_rate;
int ret;
if (is_imx6sx_qspi(q))
rate *= 4;
ret = clk_set_rate(q->clk, rate);
if (ret)
return ret;
/* Init the LUT table again. */
fsl_qspi_init_lut(q);
/* Init for AHB read */
fsl_qspi_init_abh_read(q);
return 0;
}
static struct of_device_id fsl_qspi_dt_ids[] = {
{ .compatible = "fsl,vf610-qspi", .data = (void *)&vybrid_data, },
{ .compatible = "fsl,imx6sx-qspi", .data = (void *)&imx6sx_data, },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, fsl_qspi_dt_ids);
static void fsl_qspi_set_base_addr(struct fsl_qspi *q, struct spi_nor *nor)
{
q->chip_base_addr = q->nor_size * (nor - q->nor);
}
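/*
 * (nor - q->nor) is the chip index, so each chip's window starts at
 * index * nor_size; e.g. chip 2 of a 16 MiB flash (illustrative numbers)
 * is addressed at offset 32 MiB within the memory-mapped region.
 */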
static int fsl_qspi_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
{
int ret;
struct fsl_qspi *q = nor->priv;
ret = fsl_qspi_runcmd(q, opcode, 0, len);
if (ret)
return ret;
fsl_qspi_read_data(q, len, buf);
return 0;
}
static int fsl_qspi_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len,
int write_enable)
{
struct fsl_qspi *q = nor->priv;
int ret;
if (!buf) {
ret = fsl_qspi_runcmd(q, opcode, 0, 1);
if (ret)
return ret;
if (opcode == SPINOR_OP_CHIP_ERASE)
fsl_qspi_invalid(q);
} else if (len > 0) {
ret = fsl_qspi_nor_write(q, nor, opcode, 0,
(u32 *)buf, len, NULL);
} else {
dev_err(q->dev, "invalid cmd %d\n", opcode);
ret = -EINVAL;
}
return ret;
}
static void fsl_qspi_write(struct spi_nor *nor, loff_t to,
size_t len, size_t *retlen, const u_char *buf)
{
struct fsl_qspi *q = nor->priv;
fsl_qspi_nor_write(q, nor, nor->program_opcode, to,
(u32 *)buf, len, retlen);
/* invalidate the data in the AHB buffer. */
fsl_qspi_invalid(q);
}
static int fsl_qspi_read(struct spi_nor *nor, loff_t from,
size_t len, size_t *retlen, u_char *buf)
{
struct fsl_qspi *q = nor->priv;
u8 cmd = nor->read_opcode;
int ret;
dev_dbg(q->dev, "cmd [%x],read from (0x%p, 0x%.8x, 0x%.8x),len:%d\n",
cmd, q->ahb_base, q->chip_base_addr, (unsigned int)from, len);
/* Wait until the previous command is finished. */
ret = nor->wait_till_ready(nor);
if (ret)
return ret;
/* Read out the data directly from the AHB buffer.*/
memcpy(buf, q->ahb_base + q->chip_base_addr + from, len);
*retlen += len;
return 0;
}
static int fsl_qspi_erase(struct spi_nor *nor, loff_t offs)
{
struct fsl_qspi *q = nor->priv;
int ret;
dev_dbg(nor->dev, "%dKiB at 0x%08x:0x%08x\n",
nor->mtd->erasesize / 1024, q->chip_base_addr, (u32)offs);
/* Wait until finished previous write command. */
ret = nor->wait_till_ready(nor);
if (ret)
return ret;
/* Send write enable, then erase commands. */
ret = nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0, 0);
if (ret)
return ret;
ret = fsl_qspi_runcmd(q, nor->erase_opcode, offs, 0);
if (ret)
return ret;
fsl_qspi_invalid(q);
return 0;
}
static int fsl_qspi_prep(struct spi_nor *nor, enum spi_nor_ops ops)
{
struct fsl_qspi *q = nor->priv;
int ret;
ret = clk_enable(q->clk_en);
if (ret)
return ret;
ret = clk_enable(q->clk);
if (ret) {
clk_disable(q->clk_en);
return ret;
}
fsl_qspi_set_base_addr(q, nor);
return 0;
}
static void fsl_qspi_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
{
struct fsl_qspi *q = nor->priv;
clk_disable(q->clk);
clk_disable(q->clk_en);
}
static int fsl_qspi_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct mtd_part_parser_data ppdata;
struct device *dev = &pdev->dev;
struct fsl_qspi *q;
struct resource *res;
struct spi_nor *nor;
struct mtd_info *mtd;
int ret, i = 0;
bool has_second_chip = false;
const struct of_device_id *of_id =
of_match_device(fsl_qspi_dt_ids, &pdev->dev);
q = devm_kzalloc(dev, sizeof(*q), GFP_KERNEL);
if (!q)
return -ENOMEM;
q->nor_num = of_get_child_count(dev->of_node);
if (!q->nor_num || q->nor_num > FSL_QSPI_MAX_CHIP)
return -ENODEV;
/* find the resources */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "QuadSPI");
q->iobase = devm_ioremap_resource(dev, res);
if (IS_ERR(q->iobase)) {
ret = PTR_ERR(q->iobase);
goto map_failed;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"QuadSPI-memory");
q->ahb_base = devm_ioremap_resource(dev, res);
if (IS_ERR(q->ahb_base)) {
ret = PTR_ERR(q->ahb_base);
goto map_failed;
}
q->memmap_phy = res->start;
/* find the clocks */
q->clk_en = devm_clk_get(dev, "qspi_en");
if (IS_ERR(q->clk_en)) {
ret = PTR_ERR(q->clk_en);
goto map_failed;
}
q->clk = devm_clk_get(dev, "qspi");
if (IS_ERR(q->clk)) {
ret = PTR_ERR(q->clk);
goto map_failed;
}
ret = clk_prepare_enable(q->clk_en);
if (ret) {
dev_err(dev, "can not enable the qspi_en clock\n");
goto map_failed;
}
ret = clk_prepare_enable(q->clk);
if (ret) {
clk_disable_unprepare(q->clk_en);
dev_err(dev, "can not enable the qspi clock\n");
goto map_failed;
}
/* find the irq */
ret = platform_get_irq(pdev, 0);
if (ret < 0) {
dev_err(dev, "failed to get the irq\n");
goto irq_failed;
}
ret = devm_request_irq(dev, ret,
fsl_qspi_irq_handler, 0, pdev->name, q);
if (ret) {
dev_err(dev, "failed to request irq.\n");
goto irq_failed;
}
q->dev = dev;
q->devtype_data = (struct fsl_qspi_devtype_data *)of_id->data;
platform_set_drvdata(pdev, q);
ret = fsl_qspi_nor_setup(q);
if (ret)
goto irq_failed;
if (of_get_property(np, "fsl,qspi-has-second-chip", NULL))
has_second_chip = true;
/* iterate the subnodes. */
for_each_available_child_of_node(dev->of_node, np) {
const struct spi_device_id *id;
char modalias[40];
/* skip the holes */
if (!has_second_chip)
i *= 2;
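/*
 * With one flash per bus this maps child node 0 to slot 0 and child
 * node 1 to slot 2, i.e. presumably the first chip-select of bus A and
 * of bus B, leaving the unused second chip-selects (slots 1 and 3) empty.
 */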
nor = &q->nor[i];
mtd = &q->mtd[i];
nor->mtd = mtd;
nor->dev = dev;
nor->priv = q;
mtd->priv = nor;
/* fill the hooks */
nor->read_reg = fsl_qspi_read_reg;
nor->write_reg = fsl_qspi_write_reg;
nor->read = fsl_qspi_read;
nor->write = fsl_qspi_write;
nor->erase = fsl_qspi_erase;
nor->prepare = fsl_qspi_prep;
nor->unprepare = fsl_qspi_unprep;
if (of_modalias_node(np, modalias, sizeof(modalias)) < 0)
goto map_failed;
id = spi_nor_match_id(modalias);
if (!id)
goto map_failed;
ret = of_property_read_u32(np, "spi-max-frequency",
&q->clk_rate);
if (ret < 0)
goto map_failed;
/* set the chip address for READID */
fsl_qspi_set_base_addr(q, nor);
ret = spi_nor_scan(nor, id, SPI_NOR_QUAD);
if (ret)
goto map_failed;
ppdata.of_node = np;
ret = mtd_device_parse_register(mtd, NULL, &ppdata, NULL, 0);
if (ret)
goto map_failed;
/* Set the correct NOR size now. */
if (q->nor_size == 0) {
q->nor_size = mtd->size;
/* Map the SPI NOR to an accessible address */
fsl_qspi_set_map_addr(q);
}
/*
* The TX FIFO is 64 bytes in the Vybrid, but the Page Program
* may write 256 bytes at a time. The write works in units of
* the TX FIFO, not in units of the SPI NOR's page size.
*
* So shrink spi_nor->page_size if it is larger than the
* TX FIFO.
*/
if (nor->page_size > q->devtype_data->txfifo)
nor->page_size = q->devtype_data->txfifo;
i++;
}
/* finish the rest init. */
ret = fsl_qspi_nor_setup_last(q);
if (ret)
goto last_init_failed;
clk_disable(q->clk);
clk_disable(q->clk_en);
dev_info(dev, "QuadSPI SPI NOR flash driver\n");
return 0;
last_init_failed:
for (i = 0; i < q->nor_num; i++)
mtd_device_unregister(&q->mtd[i]);
irq_failed:
clk_disable_unprepare(q->clk);
clk_disable_unprepare(q->clk_en);
map_failed:
dev_err(dev, "Freescale QuadSPI probe failed\n");
return ret;
}
static int fsl_qspi_remove(struct platform_device *pdev)
{
struct fsl_qspi *q = platform_get_drvdata(pdev);
int i;
for (i = 0; i < q->nor_num; i++)
mtd_device_unregister(&q->mtd[i]);
/* disable the hardware */
writel(QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR);
writel(0x0, q->iobase + QUADSPI_RSER);
clk_unprepare(q->clk);
clk_unprepare(q->clk_en);
return 0;
}
static struct platform_driver fsl_qspi_driver = {
.driver = {
.name = "fsl-quadspi",
.bus = &platform_bus_type,
.owner = THIS_MODULE,
.of_match_table = fsl_qspi_dt_ids,
},
.probe = fsl_qspi_probe,
.remove = fsl_qspi_remove,
};
module_platform_driver(fsl_qspi_driver);
MODULE_DESCRIPTION("Freescale QuadSPI Controller Driver");
MODULE_AUTHOR("Freescale Semiconductor Inc.");
MODULE_LICENSE("GPL v2");
/*
* Based on m25p80.c, by Mike Lavender (mike@steroidmicros.com), with
* influence from lart.c (Abraham Van Der Merwe) and mtd_dataflash.c
*
* Copyright (C) 2005, Intec Automation Inc.
* Copyright (C) 2014, Freescale Semiconductor, Inc.
*
* This code is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/math64.h>
#include <linux/mtd/cfi.h>
#include <linux/mtd/mtd.h>
#include <linux/of_platform.h>
#include <linux/spi/flash.h>
#include <linux/mtd/spi-nor.h>
/* Define max times to check status register before we give up. */
#define MAX_READY_WAIT_JIFFIES (40 * HZ) /* M25P16 specs 40s max chip erase */
#define JEDEC_MFR(_jedec_id) ((_jedec_id) >> 16)
/*
* Read the status register.
* Returns the status register value, or negative if an error occurred.
*/
static int read_sr(struct spi_nor *nor)
{
int ret;
u8 val;
ret = nor->read_reg(nor, SPINOR_OP_RDSR, &val, 1);
if (ret < 0) {
pr_err("error %d reading SR\n", (int) ret);
return ret;
}
return val;
}
/*
* Read the configuration register.
* Returns the configuration register value, or negative if an error occurred.
*/
static int read_cr(struct spi_nor *nor)
{
int ret;
u8 val;
ret = nor->read_reg(nor, SPINOR_OP_RDCR, &val, 1);
if (ret < 0) {
dev_err(nor->dev, "error %d reading CR\n", ret);
return ret;
}
return val;
}
/*
* Dummy cycle calculation for different types of read.
* It can be used to support more commands with
* different dummy cycle requirements.
*/
static inline int spi_nor_read_dummy_cycles(struct spi_nor *nor)
{
switch (nor->flash_read) {
case SPI_NOR_FAST:
case SPI_NOR_DUAL:
case SPI_NOR_QUAD:
return 1;
case SPI_NOR_NORMAL:
return 0;
}
return 0;
}
/*
* Write status register 1 byte
* Returns negative if error occurred.
*/
static inline int write_sr(struct spi_nor *nor, u8 val)
{
nor->cmd_buf[0] = val;
return nor->write_reg(nor, SPINOR_OP_WRSR, nor->cmd_buf, 1, 0);
}
/*
* Set write enable latch with Write Enable command.
* Returns negative if error occurred.
*/
static inline int write_enable(struct spi_nor *nor)
{
return nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0, 0);
}
/*
* Send the write disable instruction to the chip.
*/
static inline int write_disable(struct spi_nor *nor)
{
return nor->write_reg(nor, SPINOR_OP_WRDI, NULL, 0, 0);
}
static inline struct spi_nor *mtd_to_spi_nor(struct mtd_info *mtd)
{
return mtd->priv;
}
/* Enable/disable 4-byte addressing mode. */
static inline int set_4byte(struct spi_nor *nor, u32 jedec_id, int enable)
{
int status;
bool need_wren = false;
u8 cmd;
switch (JEDEC_MFR(jedec_id)) {
case CFI_MFR_ST: /* Micron, actually */
/* Some Micron need WREN command; all will accept it */
need_wren = true;
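/* fall through: the EN4B/EX4B handling below applies here too */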
case CFI_MFR_MACRONIX:
case 0xEF /* winbond */:
if (need_wren)
write_enable(nor);
cmd = enable ? SPINOR_OP_EN4B : SPINOR_OP_EX4B;
status = nor->write_reg(nor, cmd, NULL, 0, 0);
if (need_wren)
write_disable(nor);
return status;
default:
/* Spansion style */
nor->cmd_buf[0] = enable << 7;
return nor->write_reg(nor, SPINOR_OP_BRWR, nor->cmd_buf, 1, 0);
}
}
static int spi_nor_wait_till_ready(struct spi_nor *nor)
{
unsigned long deadline;
int sr;
deadline = jiffies + MAX_READY_WAIT_JIFFIES;
do {
cond_resched();
sr = read_sr(nor);
if (sr < 0)
break;
else if (!(sr & SR_WIP))
return 0;
} while (!time_after_eq(jiffies, deadline));
return -ETIMEDOUT;
}
/*
* Service routine to read status register until ready, or timeout occurs.
* Returns non-zero if error.
*/
static int wait_till_ready(struct spi_nor *nor)
{
return nor->wait_till_ready(nor);
}
/*
* Erase the whole flash memory
*
* Returns 0 if successful, non-zero otherwise.
*/
static int erase_chip(struct spi_nor *nor)
{
int ret;
dev_dbg(nor->dev, " %lldKiB\n", (long long)(nor->mtd->size >> 10));
/* Wait until finished previous write command. */
ret = wait_till_ready(nor);
if (ret)
return ret;
/* Send write enable, then erase commands. */
write_enable(nor);
return nor->write_reg(nor, SPINOR_OP_CHIP_ERASE, NULL, 0, 0);
}
static int spi_nor_lock_and_prep(struct spi_nor *nor, enum spi_nor_ops ops)
{
int ret = 0;
mutex_lock(&nor->lock);
if (nor->prepare) {
ret = nor->prepare(nor, ops);
if (ret) {
dev_err(nor->dev, "failed in the preparation.\n");
mutex_unlock(&nor->lock);
return ret;
}
}
return ret;
}
static void spi_nor_unlock_and_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
{
if (nor->unprepare)
nor->unprepare(nor, ops);
mutex_unlock(&nor->lock);
}
/*
* Erase an address range on the NOR chip. The address range may extend
* over one or more erase sectors. Return an error if there is a problem erasing.
*/
static int spi_nor_erase(struct mtd_info *mtd, struct erase_info *instr)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
u32 addr, len;
uint32_t rem;
int ret;
dev_dbg(nor->dev, "at 0x%llx, len %lld\n", (long long)instr->addr,
(long long)instr->len);
div_u64_rem(instr->len, mtd->erasesize, &rem);
if (rem)
return -EINVAL;
addr = instr->addr;
len = instr->len;
ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_ERASE);
if (ret)
return ret;
/* whole-chip erase? */
if (len == mtd->size) {
if (erase_chip(nor)) {
ret = -EIO;
goto erase_err;
}
/* REVISIT in some cases we could speed up erasing large regions
* by using SPINOR_OP_SE instead of SPINOR_OP_BE_4K. We may have set up
* to use "small sector erase", but that's not always optimal.
*/
/* "sector"-at-a-time erase */
} else {
while (len) {
if (nor->erase(nor, addr)) {
ret = -EIO;
goto erase_err;
}
addr += mtd->erasesize;
len -= mtd->erasesize;
}
}
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_ERASE);
instr->state = MTD_ERASE_DONE;
mtd_erase_callback(instr);
return ret;
erase_err:
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_ERASE);
instr->state = MTD_ERASE_FAILED;
return ret;
}
static int spi_nor_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
uint32_t offset = ofs;
uint8_t status_old, status_new;
int ret = 0;
ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_LOCK);
if (ret)
return ret;
/* Wait until finished previous command */
ret = wait_till_ready(nor);
if (ret)
goto err;
status_old = read_sr(nor);
if (offset < mtd->size - (mtd->size / 2))
status_new = status_old | SR_BP2 | SR_BP1 | SR_BP0;
else if (offset < mtd->size - (mtd->size / 4))
status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;
else if (offset < mtd->size - (mtd->size / 8))
status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
else if (offset < mtd->size - (mtd->size / 16))
status_new = (status_old & ~(SR_BP0 | SR_BP1)) | SR_BP2;
else if (offset < mtd->size - (mtd->size / 32))
status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
else if (offset < mtd->size - (mtd->size / 64))
status_new = (status_old & ~(SR_BP2 | SR_BP0)) | SR_BP1;
else
status_new = (status_old & ~(SR_BP2 | SR_BP1)) | SR_BP0;
/* Only modify protection if it will not unlock other areas */
if ((status_new & (SR_BP2 | SR_BP1 | SR_BP0)) >
(status_old & (SR_BP2 | SR_BP1 | SR_BP0))) {
write_enable(nor);
ret = write_sr(nor, status_new);
if (ret)
goto err;
}
err:
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_LOCK);
return ret;
}
static int spi_nor_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
uint32_t offset = ofs;
uint8_t status_old, status_new;
int ret = 0;
ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_UNLOCK);
if (ret)
return ret;
/* Wait until finished previous command */
ret = wait_till_ready(nor);
if (ret)
goto err;
status_old = read_sr(nor);
if (offset+len > mtd->size - (mtd->size / 64))
status_new = status_old & ~(SR_BP2 | SR_BP1 | SR_BP0);
else if (offset+len > mtd->size - (mtd->size / 32))
status_new = (status_old & ~(SR_BP2 | SR_BP1)) | SR_BP0;
else if (offset+len > mtd->size - (mtd->size / 16))
status_new = (status_old & ~(SR_BP2 | SR_BP0)) | SR_BP1;
else if (offset+len > mtd->size - (mtd->size / 8))
status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
else if (offset+len > mtd->size - (mtd->size / 4))
status_new = (status_old & ~(SR_BP0 | SR_BP1)) | SR_BP2;
else if (offset+len > mtd->size - (mtd->size / 2))
status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
else
status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;
/* Only modify protection if it will not lock other areas */
if ((status_new & (SR_BP2 | SR_BP1 | SR_BP0)) <
(status_old & (SR_BP2 | SR_BP1 | SR_BP0))) {
write_enable(nor);
ret = write_sr(nor, status_new);
if (ret)
goto err;
}
err:
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_UNLOCK);
return ret;
}
struct flash_info {
/* JEDEC id zero means "no ID" (most older chips); otherwise it has
* a high byte of zero plus three data bytes: the manufacturer id,
* then a two byte device id.
*/
u32 jedec_id;
u16 ext_id;
/* The size listed here is what works with SPINOR_OP_SE, which isn't
* necessarily called a "sector" by the vendor.
*/
unsigned sector_size;
u16 n_sectors;
u16 page_size;
u16 addr_width;
u16 flags;
#define SECT_4K 0x01 /* SPINOR_OP_BE_4K works uniformly */
#define SPI_NOR_NO_ERASE 0x02 /* No erase command needed */
#define SST_WRITE 0x04 /* use SST byte programming */
#define SPI_NOR_NO_FR 0x08 /* Can't do fastread */
#define SECT_4K_PMC 0x10 /* SPINOR_OP_BE_4K_PMC works uniformly */
#define SPI_NOR_DUAL_READ 0x20 /* Flash supports Dual Read */
#define SPI_NOR_QUAD_READ 0x40 /* Flash supports Quad Read */
};
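/*
 * Example: the "mx25l3205d" entry below uses jedec_id 0xc22016, i.e.
 * manufacturer id 0xc2 (Macronix) followed by the two-byte device id
 * 0x2016, with no extended id.
 */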
#define INFO(_jedec_id, _ext_id, _sector_size, _n_sectors, _flags) \
((kernel_ulong_t)&(struct flash_info) { \
.jedec_id = (_jedec_id), \
.ext_id = (_ext_id), \
.sector_size = (_sector_size), \
.n_sectors = (_n_sectors), \
.page_size = 256, \
.flags = (_flags), \
})
#define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_width, _flags) \
((kernel_ulong_t)&(struct flash_info) { \
.sector_size = (_sector_size), \
.n_sectors = (_n_sectors), \
.page_size = (_page_size), \
.addr_width = (_addr_width), \
.flags = (_flags), \
})
/* NOTE: double check command sets and memory organization when you add
* more NOR chips. This current list focuses on newer chips, which
* have been converging on command sets that include a JEDEC ID.
*/
const struct spi_device_id spi_nor_ids[] = {
/* Atmel -- some are (confusingly) marketed as "DataFlash" */
{ "at25fs010", INFO(0x1f6601, 0, 32 * 1024, 4, SECT_4K) },
{ "at25fs040", INFO(0x1f6604, 0, 64 * 1024, 8, SECT_4K) },
{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024, 8, SECT_4K) },
{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024, 64, SECT_4K) },
{ "at25df641", INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) },
{ "at26f004", INFO(0x1f0400, 0, 64 * 1024, 8, SECT_4K) },
{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K) },
{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K) },
{ "at26df321", INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) },
{ "at45db081d", INFO(0x1f2500, 0, 64 * 1024, 16, SECT_4K) },
/* EON -- en25xxx */
{ "en25f32", INFO(0x1c3116, 0, 64 * 1024, 64, SECT_4K) },
{ "en25p32", INFO(0x1c2016, 0, 64 * 1024, 64, 0) },
{ "en25q32b", INFO(0x1c3016, 0, 64 * 1024, 64, 0) },
{ "en25p64", INFO(0x1c2017, 0, 64 * 1024, 128, 0) },
{ "en25q64", INFO(0x1c3017, 0, 64 * 1024, 128, SECT_4K) },
{ "en25qh256", INFO(0x1c7019, 0, 64 * 1024, 512, 0) },
/* ESMT */
{ "f25l32pa", INFO(0x8c2016, 0, 64 * 1024, 64, SECT_4K) },
/* Everspin */
{ "mr25h256", CAT25_INFO( 32 * 1024, 1, 256, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
{ "mr25h10", CAT25_INFO(128 * 1024, 1, 256, 3, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
/* GigaDevice */
{ "gd25q32", INFO(0xc84016, 0, 64 * 1024, 64, SECT_4K) },
{ "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) },
/* Intel/Numonyx -- xxxs33b */
{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) },
{ "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) },
{ "640s33b", INFO(0x898913, 0, 64 * 1024, 128, 0) },
/* Macronix */
{ "mx25l2005a", INFO(0xc22012, 0, 64 * 1024, 4, SECT_4K) },
{ "mx25l4005a", INFO(0xc22013, 0, 64 * 1024, 8, SECT_4K) },
{ "mx25l8005", INFO(0xc22014, 0, 64 * 1024, 16, 0) },
{ "mx25l1606e", INFO(0xc22015, 0, 64 * 1024, 32, SECT_4K) },
{ "mx25l3205d", INFO(0xc22016, 0, 64 * 1024, 64, 0) },
{ "mx25l3255e", INFO(0xc29e16, 0, 64 * 1024, 64, SECT_4K) },
{ "mx25l6405d", INFO(0xc22017, 0, 64 * 1024, 128, 0) },
{ "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, 0) },
{ "mx25l12855e", INFO(0xc22618, 0, 64 * 1024, 256, 0) },
{ "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, 0) },
{ "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
{ "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, SPI_NOR_QUAD_READ) },
{ "mx66l1g55g", INFO(0xc2261b, 0, 64 * 1024, 2048, SPI_NOR_QUAD_READ) },
/* Micron */
{ "n25q064", INFO(0x20ba17, 0, 64 * 1024, 128, 0) },
{ "n25q128a11", INFO(0x20bb18, 0, 64 * 1024, 256, 0) },
{ "n25q128a13", INFO(0x20ba18, 0, 64 * 1024, 256, 0) },
{ "n25q256a", INFO(0x20ba19, 0, 64 * 1024, 512, SECT_4K) },
{ "n25q512a", INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K) },
/* PMC */
{ "pm25lv512", INFO(0, 0, 32 * 1024, 2, SECT_4K_PMC) },
{ "pm25lv010", INFO(0, 0, 32 * 1024, 4, SECT_4K_PMC) },
{ "pm25lq032", INFO(0x7f9d46, 0, 64 * 1024, 64, SECT_4K) },
/* Spansion -- single (large) sector size only, at least
* for the chips listed here (without boot sectors).
*/
{ "s25sl032p", INFO(0x010215, 0x4d00, 64 * 1024, 64, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "s25sl064p", INFO(0x010216, 0x4d00, 64 * 1024, 128, 0) },
{ "s25fl256s0", INFO(0x010219, 0x4d00, 256 * 1024, 128, 0) },
{ "s25fl256s1", INFO(0x010219, 0x4d01, 64 * 1024, 512, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "s25fl512s", INFO(0x010220, 0x4d00, 256 * 1024, 256, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "s70fl01gs", INFO(0x010221, 0x4d00, 256 * 1024, 256, 0) },
{ "s25sl12800", INFO(0x012018, 0x0300, 256 * 1024, 64, 0) },
{ "s25sl12801", INFO(0x012018, 0x0301, 64 * 1024, 256, 0) },
{ "s25fl129p0", INFO(0x012018, 0x4d00, 256 * 1024, 64, 0) },
{ "s25fl129p1", INFO(0x012018, 0x4d01, 64 * 1024, 256, 0) },
{ "s25sl004a", INFO(0x010212, 0, 64 * 1024, 8, 0) },
{ "s25sl008a", INFO(0x010213, 0, 64 * 1024, 16, 0) },
{ "s25sl016a", INFO(0x010214, 0, 64 * 1024, 32, 0) },
{ "s25sl032a", INFO(0x010215, 0, 64 * 1024, 64, 0) },
{ "s25sl064a", INFO(0x010216, 0, 64 * 1024, 128, 0) },
{ "s25fl008k", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) },
{ "s25fl016k", INFO(0xef4015, 0, 64 * 1024, 32, SECT_4K) },
{ "s25fl064k", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) },
/* SST -- large erase sizes are "overlays", "sectors" are 4K */
{ "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) },
{ "sst25vf080b", INFO(0xbf258e, 0, 64 * 1024, 16, SECT_4K | SST_WRITE) },
{ "sst25vf016b", INFO(0xbf2541, 0, 64 * 1024, 32, SECT_4K | SST_WRITE) },
{ "sst25vf032b", INFO(0xbf254a, 0, 64 * 1024, 64, SECT_4K | SST_WRITE) },
{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128, SECT_4K) },
{ "sst25wf512", INFO(0xbf2501, 0, 64 * 1024, 1, SECT_4K | SST_WRITE) },
{ "sst25wf010", INFO(0xbf2502, 0, 64 * 1024, 2, SECT_4K | SST_WRITE) },
{ "sst25wf020", INFO(0xbf2503, 0, 64 * 1024, 4, SECT_4K | SST_WRITE) },
{ "sst25wf040", INFO(0xbf2504, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) },
/* ST Microelectronics -- newer production may have feature updates */
{ "m25p05", INFO(0x202010, 0, 32 * 1024, 2, 0) },
{ "m25p10", INFO(0x202011, 0, 32 * 1024, 4, 0) },
{ "m25p20", INFO(0x202012, 0, 64 * 1024, 4, 0) },
{ "m25p40", INFO(0x202013, 0, 64 * 1024, 8, 0) },
{ "m25p80", INFO(0x202014, 0, 64 * 1024, 16, 0) },
{ "m25p16", INFO(0x202015, 0, 64 * 1024, 32, 0) },
{ "m25p32", INFO(0x202016, 0, 64 * 1024, 64, 0) },
{ "m25p64", INFO(0x202017, 0, 64 * 1024, 128, 0) },
{ "m25p128", INFO(0x202018, 0, 256 * 1024, 64, 0) },
{ "n25q032", INFO(0x20ba16, 0, 64 * 1024, 64, 0) },
{ "m25p05-nonjedec", INFO(0, 0, 32 * 1024, 2, 0) },
{ "m25p10-nonjedec", INFO(0, 0, 32 * 1024, 4, 0) },
{ "m25p20-nonjedec", INFO(0, 0, 64 * 1024, 4, 0) },
{ "m25p40-nonjedec", INFO(0, 0, 64 * 1024, 8, 0) },
{ "m25p80-nonjedec", INFO(0, 0, 64 * 1024, 16, 0) },
{ "m25p16-nonjedec", INFO(0, 0, 64 * 1024, 32, 0) },
{ "m25p32-nonjedec", INFO(0, 0, 64 * 1024, 64, 0) },
{ "m25p64-nonjedec", INFO(0, 0, 64 * 1024, 128, 0) },
{ "m25p128-nonjedec", INFO(0, 0, 256 * 1024, 64, 0) },
{ "m45pe10", INFO(0x204011, 0, 64 * 1024, 2, 0) },
{ "m45pe80", INFO(0x204014, 0, 64 * 1024, 16, 0) },
{ "m45pe16", INFO(0x204015, 0, 64 * 1024, 32, 0) },
{ "m25pe20", INFO(0x208012, 0, 64 * 1024, 4, 0) },
{ "m25pe80", INFO(0x208014, 0, 64 * 1024, 16, 0) },
{ "m25pe16", INFO(0x208015, 0, 64 * 1024, 32, SECT_4K) },
{ "m25px16", INFO(0x207115, 0, 64 * 1024, 32, SECT_4K) },
{ "m25px32", INFO(0x207116, 0, 64 * 1024, 64, SECT_4K) },
{ "m25px32-s0", INFO(0x207316, 0, 64 * 1024, 64, SECT_4K) },
{ "m25px32-s1", INFO(0x206316, 0, 64 * 1024, 64, SECT_4K) },
{ "m25px64", INFO(0x207117, 0, 64 * 1024, 128, 0) },
/* Winbond -- w25x "blocks" are 64K, "sectors" are 4KiB */
{ "w25x10", INFO(0xef3011, 0, 64 * 1024, 2, SECT_4K) },
{ "w25x20", INFO(0xef3012, 0, 64 * 1024, 4, SECT_4K) },
{ "w25x40", INFO(0xef3013, 0, 64 * 1024, 8, SECT_4K) },
{ "w25x80", INFO(0xef3014, 0, 64 * 1024, 16, SECT_4K) },
{ "w25x16", INFO(0xef3015, 0, 64 * 1024, 32, SECT_4K) },
{ "w25x32", INFO(0xef3016, 0, 64 * 1024, 64, SECT_4K) },
{ "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) },
{ "w25q32dw", INFO(0xef6016, 0, 64 * 1024, 64, SECT_4K) },
{ "w25x64", INFO(0xef3017, 0, 64 * 1024, 128, SECT_4K) },
{ "w25q64", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) },
{ "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
{ "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) },
{ "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) },
{ "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
{ "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K) },
/* Catalyst / On Semiconductor -- non-JEDEC */
{ "cat25c11", CAT25_INFO( 16, 8, 16, 1, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
{ "cat25c03", CAT25_INFO( 32, 8, 16, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
{ "cat25c09", CAT25_INFO( 128, 8, 32, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
{ "cat25c17", CAT25_INFO( 256, 8, 32, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
{ "cat25128", CAT25_INFO(2048, 8, 64, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
{ },
};
EXPORT_SYMBOL_GPL(spi_nor_ids);
static const struct spi_device_id *spi_nor_read_id(struct spi_nor *nor)
{
int tmp;
u8 id[5];
u32 jedec;
u16 ext_jedec;
struct flash_info *info;
tmp = nor->read_reg(nor, SPINOR_OP_RDID, id, 5);
if (tmp < 0) {
dev_dbg(nor->dev, " error %d reading JEDEC ID\n", tmp);
return ERR_PTR(tmp);
}
jedec = id[0];
jedec = jedec << 8;
jedec |= id[1];
jedec = jedec << 8;
jedec |= id[2];
ext_jedec = id[3] << 8 | id[4];
for (tmp = 0; tmp < ARRAY_SIZE(spi_nor_ids) - 1; tmp++) {
info = (void *)spi_nor_ids[tmp].driver_data;
if (info->jedec_id == jedec) {
if (info->ext_id == 0 || info->ext_id == ext_jedec)
return &spi_nor_ids[tmp];
}
}
dev_err(nor->dev, "unrecognized JEDEC id %06x\n", jedec);
return ERR_PTR(-ENODEV);
}
static const struct spi_device_id *jedec_probe(struct spi_nor *nor)
{
return nor->read_id(nor);
}
static int spi_nor_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
int ret;
dev_dbg(nor->dev, "from 0x%08x, len %zd\n", (u32)from, len);
ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_READ);
if (ret)
return ret;
ret = nor->read(nor, from, len, retlen, buf);
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_READ);
return ret;
}
static int sst_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
size_t actual;
int ret;
dev_dbg(nor->dev, "to 0x%08x, len %zd\n", (u32)to, len);
ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_WRITE);
if (ret)
return ret;
/* Wait until finished previous write command. */
ret = wait_till_ready(nor);
if (ret)
goto time_out;
write_enable(nor);
nor->sst_write_second = false;
actual = to % 2;
/* Start write from odd address. */
if (actual) {
nor->program_opcode = SPINOR_OP_BP;
/* write one byte. */
nor->write(nor, to, 1, retlen, buf);
ret = wait_till_ready(nor);
if (ret)
goto time_out;
}
to += actual;
/* Write out most of the data here. */
for (; actual < len - 1; actual += 2) {
nor->program_opcode = SPINOR_OP_AAI_WP;
/* write two bytes. */
nor->write(nor, to, 2, retlen, buf + actual);
ret = wait_till_ready(nor);
if (ret)
goto time_out;
to += 2;
nor->sst_write_second = true;
}
nor->sst_write_second = false;
write_disable(nor);
ret = wait_till_ready(nor);
if (ret)
goto time_out;
/* Write out trailing byte if it exists. */
if (actual != len) {
write_enable(nor);
nor->program_opcode = SPINOR_OP_BP;
nor->write(nor, to, 1, retlen, buf + actual);
ret = wait_till_ready(nor);
if (ret)
goto time_out;
write_disable(nor);
}
time_out:
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_WRITE);
return ret;
}
/*
* Write an address range to the NOR chip. Data must be written in
* nor->page_size chunks. The address range may be any size provided
* it is within the physical boundaries.
*/
static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
u32 page_offset, page_size, i;
int ret;
dev_dbg(nor->dev, "to 0x%08x, len %zd\n", (u32)to, len);
ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_WRITE);
if (ret)
return ret;
/* Wait until finished previous write command. */
ret = wait_till_ready(nor);
if (ret)
goto write_err;
write_enable(nor);
page_offset = to & (nor->page_size - 1);
/* do all the bytes fit onto one page? */
if (page_offset + len <= nor->page_size) {
nor->write(nor, to, len, retlen, buf);
} else {
/* the size of data remaining on the first page */
page_size = nor->page_size - page_offset;
nor->write(nor, to, page_size, retlen, buf);
/* write everything in nor->page_size chunks */
for (i = page_size; i < len; i += page_size) {
page_size = len - i;
if (page_size > nor->page_size)
page_size = nor->page_size;
wait_till_ready(nor);
write_enable(nor);
nor->write(nor, to + i, page_size, retlen, buf + i);
}
}
write_err:
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_WRITE);
return ret;
}
static int macronix_quad_enable(struct spi_nor *nor)
{
int ret, val;
val = read_sr(nor);
write_enable(nor);
nor->cmd_buf[0] = val | SR_QUAD_EN_MX;
nor->write_reg(nor, SPINOR_OP_WRSR, nor->cmd_buf, 1, 0);
if (wait_till_ready(nor))
return 1;
ret = read_sr(nor);
if (!(ret > 0 && (ret & SR_QUAD_EN_MX))) {
dev_err(nor->dev, "Macronix Quad bit not set\n");
return -EINVAL;
}
return 0;
}
/*
* Write status register and configuration register with 2 bytes.
* The first byte will be written to the status register, while the
* second byte will be written to the configuration register.
* Return negative if error occurred.
*/
static int write_sr_cr(struct spi_nor *nor, u16 val)
{
nor->cmd_buf[0] = val & 0xff;
nor->cmd_buf[1] = (val >> 8);
return nor->write_reg(nor, SPINOR_OP_WRSR, nor->cmd_buf, 2, 0);
}
static int spansion_quad_enable(struct spi_nor *nor)
{
int ret;
int quad_en = CR_QUAD_EN_SPAN << 8;
write_enable(nor);
ret = write_sr_cr(nor, quad_en);
if (ret < 0) {
dev_err(nor->dev,
"error while writing configuration register\n");
return -EINVAL;
}
/* read back and check it */
ret = read_cr(nor);
if (!(ret > 0 && (ret & CR_QUAD_EN_SPAN))) {
dev_err(nor->dev, "Spansion Quad bit not set\n");
return -EINVAL;
}
return 0;
}
static int set_quad_mode(struct spi_nor *nor, u32 jedec_id)
{
int status;
switch (JEDEC_MFR(jedec_id)) {
case CFI_MFR_MACRONIX:
status = macronix_quad_enable(nor);
if (status) {
dev_err(nor->dev, "Macronix quad-read not enabled\n");
return -EINVAL;
}
return status;
default:
status = spansion_quad_enable(nor);
if (status) {
dev_err(nor->dev, "Spansion quad-read not enabled\n");
return -EINVAL;
}
return status;
}
}
static int spi_nor_check(struct spi_nor *nor)
{
if (!nor->dev || !nor->read || !nor->write ||
!nor->read_reg || !nor->write_reg || !nor->erase) {
pr_err("spi-nor: please fill all the necessary fields!\n");
return -EINVAL;
}
if (!nor->read_id)
nor->read_id = spi_nor_read_id;
if (!nor->wait_till_ready)
nor->wait_till_ready = spi_nor_wait_till_ready;
return 0;
}
int spi_nor_scan(struct spi_nor *nor, const struct spi_device_id *id,
enum read_mode mode)
{
struct flash_info *info;
struct flash_platform_data *data;
struct device *dev = nor->dev;
struct mtd_info *mtd = nor->mtd;
struct device_node *np = dev->of_node;
int ret;
int i;
ret = spi_nor_check(nor);
if (ret)
return ret;
/* Platform data helps sort out which chip type we have, as
* well as how this board partitions it. If we don't have
* a chip ID, try the JEDEC id commands; they'll work for most
* newer chips, even if we don't recognize the particular chip.
*/
data = dev_get_platdata(dev);
if (data && data->type) {
const struct spi_device_id *plat_id;
for (i = 0; i < ARRAY_SIZE(spi_nor_ids) - 1; i++) {
plat_id = &spi_nor_ids[i];
if (strcmp(data->type, plat_id->name))
continue;
break;
}
if (i < ARRAY_SIZE(spi_nor_ids) - 1)
id = plat_id;
else
dev_warn(dev, "unrecognized id %s\n", data->type);
}
info = (void *)id->driver_data;
if (info->jedec_id) {
const struct spi_device_id *jid;
jid = jedec_probe(nor);
if (IS_ERR(jid)) {
return PTR_ERR(jid);
} else if (jid != id) {
/*
* JEDEC knows better, so overwrite platform ID. We
* can't trust partitions any longer, but we'll let
* mtd apply them anyway, since some partitions may be
* marked read-only, and we don't want to lose that
* information, even if it's not 100% accurate.
*/
dev_warn(dev, "found %s, expected %s\n",
jid->name, id->name);
id = jid;
info = (void *)jid->driver_data;
}
}
mutex_init(&nor->lock);
/*
* Atmel, SST and Intel/Numonyx serial nor tend to power
* up with the software protection bits set
*/
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ATMEL ||
JEDEC_MFR(info->jedec_id) == CFI_MFR_INTEL ||
JEDEC_MFR(info->jedec_id) == CFI_MFR_SST) {
write_enable(nor);
write_sr(nor, 0);
}
if (data && data->name)
mtd->name = data->name;
else
mtd->name = dev_name(dev);
mtd->type = MTD_NORFLASH;
mtd->writesize = 1;
mtd->flags = MTD_CAP_NORFLASH;
mtd->size = info->sector_size * info->n_sectors;
mtd->_erase = spi_nor_erase;
mtd->_read = spi_nor_read;
/* nor protection support for STmicro chips */
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ST) {
mtd->_lock = spi_nor_lock;
mtd->_unlock = spi_nor_unlock;
}
/* sst nor chips use AAI word program */
if (info->flags & SST_WRITE)
mtd->_write = sst_write;
else
mtd->_write = spi_nor_write;
/* prefer "small sector" erase if possible */
if (info->flags & SECT_4K) {
nor->erase_opcode = SPINOR_OP_BE_4K;
mtd->erasesize = 4096;
} else if (info->flags & SECT_4K_PMC) {
nor->erase_opcode = SPINOR_OP_BE_4K_PMC;
mtd->erasesize = 4096;
} else {
nor->erase_opcode = SPINOR_OP_SE;
mtd->erasesize = info->sector_size;
}
if (info->flags & SPI_NOR_NO_ERASE)
mtd->flags |= MTD_NO_ERASE;
mtd->dev.parent = dev;
nor->page_size = info->page_size;
mtd->writebufsize = nor->page_size;
if (np) {
/* If we were instantiated by DT, use it */
if (of_property_read_bool(np, "m25p,fast-read"))
nor->flash_read = SPI_NOR_FAST;
else
nor->flash_read = SPI_NOR_NORMAL;
} else {
/* If we weren't instantiated by DT, default to fast-read */
nor->flash_read = SPI_NOR_FAST;
}
/* Some devices cannot do fast-read, no matter what DT tells us */
if (info->flags & SPI_NOR_NO_FR)
nor->flash_read = SPI_NOR_NORMAL;
/* Quad/Dual-read mode takes precedence over fast/normal */
if (mode == SPI_NOR_QUAD && info->flags & SPI_NOR_QUAD_READ) {
ret = set_quad_mode(nor, info->jedec_id);
if (ret) {
dev_err(dev, "quad mode not supported\n");
return ret;
}
nor->flash_read = SPI_NOR_QUAD;
} else if (mode == SPI_NOR_DUAL && info->flags & SPI_NOR_DUAL_READ) {
nor->flash_read = SPI_NOR_DUAL;
}
/* Default commands */
switch (nor->flash_read) {
case SPI_NOR_QUAD:
nor->read_opcode = SPINOR_OP_READ_1_1_4;
break;
case SPI_NOR_DUAL:
nor->read_opcode = SPINOR_OP_READ_1_1_2;
break;
case SPI_NOR_FAST:
nor->read_opcode = SPINOR_OP_READ_FAST;
break;
case SPI_NOR_NORMAL:
nor->read_opcode = SPINOR_OP_READ;
break;
default:
dev_err(dev, "No Read opcode defined\n");
return -EINVAL;
}
nor->program_opcode = SPINOR_OP_PP;
if (info->addr_width)
nor->addr_width = info->addr_width;
else if (mtd->size > 0x1000000) {
/* enable 4-byte addressing if the device exceeds 16MiB */
nor->addr_width = 4;
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_AMD) {
/* Dedicated 4-byte command set */
switch (nor->flash_read) {
case SPI_NOR_QUAD:
nor->read_opcode = SPINOR_OP_READ4_1_1_4;
break;
case SPI_NOR_DUAL:
nor->read_opcode = SPINOR_OP_READ4_1_1_2;
break;
case SPI_NOR_FAST:
nor->read_opcode = SPINOR_OP_READ4_FAST;
break;
case SPI_NOR_NORMAL:
nor->read_opcode = SPINOR_OP_READ4;
break;
}
nor->program_opcode = SPINOR_OP_PP_4B;
/* No small sector erase for 4-byte command set */
nor->erase_opcode = SPINOR_OP_SE_4B;
mtd->erasesize = info->sector_size;
} else
set_4byte(nor, info->jedec_id, 1);
} else {
nor->addr_width = 3;
}
nor->read_dummy = spi_nor_read_dummy_cycles(nor);
dev_info(dev, "%s (%lld Kbytes)\n", id->name,
(long long)mtd->size >> 10);
dev_dbg(dev,
"mtd .name = %s, .size = 0x%llx (%lldMiB), "
".erasesize = 0x%.8x (%uKiB) .numeraseregions = %d\n",
mtd->name, (long long)mtd->size, (long long)(mtd->size >> 20),
mtd->erasesize, mtd->erasesize / 1024, mtd->numeraseregions);
if (mtd->numeraseregions)
for (i = 0; i < mtd->numeraseregions; i++)
dev_dbg(dev,
"mtd.eraseregions[%d] = { .offset = 0x%llx, "
".erasesize = 0x%.8x (%uKiB), "
".numblocks = %d }\n",
i, (long long)mtd->eraseregions[i].offset,
mtd->eraseregions[i].erasesize,
mtd->eraseregions[i].erasesize / 1024,
mtd->eraseregions[i].numblocks);
return 0;
}
EXPORT_SYMBOL_GPL(spi_nor_scan);
const struct spi_device_id *spi_nor_match_id(char *name)
{
const struct spi_device_id *id = spi_nor_ids;
while (id->name[0]) {
if (!strcmp(name, id->name))
return id;
id++;
}
return NULL;
}
EXPORT_SYMBOL_GPL(spi_nor_match_id);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Huang Shijie <shijie8@gmail.com>");
MODULE_AUTHOR("Mike Lavender");
MODULE_DESCRIPTION("framework for SPI NOR");
......@@ -69,8 +69,8 @@ static int write_eraseblock(int ebnum)
int err = 0;
loff_t addr = ebnum * mtd->erasesize;
prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt);
for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) {
prandom_bytes_state(&rnd_state, writebuf, use_len);
ops.mode = MTD_OPS_AUTO_OOB;
ops.len = 0;
ops.retlen = 0;
......@@ -78,7 +78,7 @@ static int write_eraseblock(int ebnum)
ops.oobretlen = 0;
ops.ooboffs = use_offset;
ops.datbuf = NULL;
ops.oobbuf = writebuf;
ops.oobbuf = writebuf + (use_len_max * i) + use_offset;
err = mtd_write_oob(mtd, addr, &ops);
if (err || ops.oobretlen != use_len) {
pr_err("error: writeoob failed at %#llx\n",
......@@ -122,8 +122,8 @@ static int verify_eraseblock(int ebnum)
int err = 0;
loff_t addr = ebnum * mtd->erasesize;
prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt);
for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) {
prandom_bytes_state(&rnd_state, writebuf, use_len);
ops.mode = MTD_OPS_AUTO_OOB;
ops.len = 0;
ops.retlen = 0;
......@@ -139,7 +139,8 @@ static int verify_eraseblock(int ebnum)
errcnt += 1;
return err ? err : -1;
}
if (memcmp(readbuf, writebuf, use_len)) {
if (memcmp(readbuf, writebuf + (use_len_max * i) + use_offset,
use_len)) {
pr_err("error: verify failed at %#llx\n",
(long long)addr);
errcnt += 1;
......@@ -166,7 +167,9 @@ static int verify_eraseblock(int ebnum)
errcnt += 1;
return err ? err : -1;
}
if (memcmp(readbuf + use_offset, writebuf, use_len)) {
if (memcmp(readbuf + use_offset,
writebuf + (use_len_max * i) + use_offset,
use_len)) {
pr_err("error: verify failed at %#llx\n",
(long long)addr);
errcnt += 1;
......@@ -566,8 +569,8 @@ static int __init mtd_oobtest_init(void)
if (bbt[i] || bbt[i + 1])
continue;
addr = (i + 1) * mtd->erasesize - mtd->writesize;
prandom_bytes_state(&rnd_state, writebuf, sz * cnt);
for (pg = 0; pg < cnt; ++pg) {
prandom_bytes_state(&rnd_state, writebuf, sz);
ops.mode = MTD_OPS_AUTO_OOB;
ops.len = 0;
ops.retlen = 0;
......@@ -575,7 +578,7 @@ static int __init mtd_oobtest_init(void)
ops.oobretlen = 0;
ops.ooboffs = 0;
ops.datbuf = NULL;
ops.oobbuf = writebuf;
ops.oobbuf = writebuf + pg * sz;
err = mtd_write_oob(mtd, addr, &ops);
if (err)
goto out;
......
......@@ -175,6 +175,11 @@ typedef enum {
#define NAND_OWN_BUFFERS 0x00020000
/* Chip may not exist, so silence any errors in scan */
#define NAND_SCAN_SILENT_NODEV 0x00040000
/*
* This option could be defined by controller drivers to protect against
* kmap'ed, vmalloc'ed highmem buffers being passed from upper layers
*/
#define NAND_USE_BOUNCE_BUFFER 0x00080000
/*
* Autodetect nand buswidth with readid/onfi.
* This suppose the driver will configure the hardware in 8 bits mode
......@@ -552,8 +557,7 @@ struct nand_buffers {
* @ecc: [BOARDSPECIFIC] ECC control structure
* @buffers: buffer structure for read/write
* @hwcontrol: platform-specific hardware control structure
* @erase_cmd: [INTERN] erase command write function, selectable due
* to AND support.
* @erase: [REPLACEABLE] erase function
* @scan_bbt: [REPLACEABLE] function to scan bad block table
* @chip_delay: [BOARDSPECIFIC] chip dependent delay for transferring
* data from array to read regs (tR).
......@@ -637,7 +641,7 @@ struct nand_chip {
void (*cmdfunc)(struct mtd_info *mtd, unsigned command, int column,
int page_addr);
int(*waitfunc)(struct mtd_info *mtd, struct nand_chip *this);
void (*erase_cmd)(struct mtd_info *mtd, int page);
int (*erase)(struct mtd_info *mtd, int page);
int (*scan_bbt)(struct mtd_info *mtd);
int (*errstat)(struct mtd_info *mtd, struct nand_chip *this, int state,
int status, int page);
......
......@@ -101,9 +101,6 @@ static inline void send_pfow_command(struct map_info *map,
unsigned long len, map_word *datum)
{
int bits_per_chip = map_bankwidth(map) * 8;
int chipnum;
struct lpddr_private *lpddr = map->fldrv_priv;
chipnum = adr >> lpddr->chipshift;
map_write(map, CMD(cmd_code), map->pfow_base + PFOW_COMMAND_CODE);
map_write(map, CMD(adr & ((1<<bits_per_chip) - 1)),
......
/*
* Copyright (C) 2014 Freescale Semiconductor, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#ifndef __LINUX_MTD_SPI_NOR_H
#define __LINUX_MTD_SPI_NOR_H
/*
* Note on opcode nomenclature: some opcodes have a format like
* SPINOR_OP_FUNCTION{4,}_x_y_z. The numbers x, y, and z stand for the number
* of I/O lines used for the opcode, address, and data (respectively). The
* FUNCTION has an optional suffix of '4', to represent an opcode which
* requires a 4-byte (32-bit) address.
*/
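/*
* For example, SPINOR_OP_READ4_1_1_4 (0x6c, defined below) is a read opcode
* that takes a 4-byte address and uses 1 I/O line for the opcode, 1 for the
* address and 4 for the data.
*/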
/* Flash opcodes. */
#define SPINOR_OP_WREN 0x06 /* Write enable */
#define SPINOR_OP_RDSR 0x05 /* Read status register */
#define SPINOR_OP_WRSR 0x01 /* Write status register 1 byte */
#define SPINOR_OP_READ 0x03 /* Read data bytes (low frequency) */
#define SPINOR_OP_READ_FAST 0x0b /* Read data bytes (high frequency) */
#define SPINOR_OP_READ_1_1_2 0x3b /* Read data bytes (Dual SPI) */
#define SPINOR_OP_READ_1_1_4 0x6b /* Read data bytes (Quad SPI) */
#define SPINOR_OP_PP 0x02 /* Page program (up to 256 bytes) */
#define SPINOR_OP_BE_4K 0x20 /* Erase 4KiB block */
#define SPINOR_OP_BE_4K_PMC 0xd7 /* Erase 4KiB block on PMC chips */
#define SPINOR_OP_BE_32K 0x52 /* Erase 32KiB block */
#define SPINOR_OP_CHIP_ERASE 0xc7 /* Erase whole flash chip */
#define SPINOR_OP_SE 0xd8 /* Sector erase (usually 64KiB) */
#define SPINOR_OP_RDID 0x9f /* Read JEDEC ID */
#define SPINOR_OP_RDCR 0x35 /* Read configuration register */
/* 4-byte address opcodes - used on Spansion and some Macronix flashes. */
#define SPINOR_OP_READ4 0x13 /* Read data bytes (low frequency) */
#define SPINOR_OP_READ4_FAST 0x0c /* Read data bytes (high frequency) */
#define SPINOR_OP_READ4_1_1_2 0x3c /* Read data bytes (Dual SPI) */
#define SPINOR_OP_READ4_1_1_4 0x6c /* Read data bytes (Quad SPI) */
#define SPINOR_OP_PP_4B 0x12 /* Page program (up to 256 bytes) */
#define SPINOR_OP_SE_4B 0xdc /* Sector erase (usually 64KiB) */
/* Used for SST flashes only. */
#define SPINOR_OP_BP 0x02 /* Byte program */
#define SPINOR_OP_WRDI 0x04 /* Write disable */
#define SPINOR_OP_AAI_WP 0xad /* Auto address increment word program */
/* Used for Macronix and Winbond flashes. */
#define SPINOR_OP_EN4B 0xb7 /* Enter 4-byte mode */
#define SPINOR_OP_EX4B 0xe9 /* Exit 4-byte mode */
/* Used for Spansion flashes only. */
#define SPINOR_OP_BRWR 0x17 /* Bank register write */
/* Status Register bits. */
#define SR_WIP 1 /* Write in progress */
#define SR_WEL 2 /* Write enable latch */
/* meaning of other SR_* bits may differ between vendors */
#define SR_BP0 4 /* Block protect 0 */
#define SR_BP1 8 /* Block protect 1 */
#define SR_BP2 0x10 /* Block protect 2 */
#define SR_SRWD 0x80 /* SR write protect */
#define SR_QUAD_EN_MX 0x40 /* Macronix Quad I/O */
/* Configuration Register bits. */
#define CR_QUAD_EN_SPAN 0x2 /* Spansion Quad I/O */
enum read_mode {
SPI_NOR_NORMAL = 0,
SPI_NOR_FAST,
SPI_NOR_DUAL,
SPI_NOR_QUAD,
};
/**
* struct spi_nor_xfer_cfg - Structure for defining a Serial Flash transfer
* @wren: command for "Write Enable", or 0x00 for not required
* @cmd: command for operation
* @cmd_pins: number of pins to send @cmd (1, 2, 4)
* @addr: address for operation
* @addr_pins: number of pins to send @addr (1, 2, 4)
* @addr_width: number of address bytes
* (3, 4, or 0 if no address is required)
* @mode: mode data
* @mode_pins: number of pins to send @mode (1, 2, 4)
* @mode_cycles: number of mode cycles (0 for mode not required)
* @dummy_cycles: number of dummy cycles (0 for dummy not required)
*/
struct spi_nor_xfer_cfg {
u8 wren;
u8 cmd;
u8 cmd_pins;
u32 addr;
u8 addr_pins;
u8 addr_width;
u8 mode;
u8 mode_pins;
u8 mode_cycles;
u8 dummy_cycles;
};
#define SPI_NOR_MAX_CMD_SIZE 8
enum spi_nor_ops {
SPI_NOR_OPS_READ = 0,
SPI_NOR_OPS_WRITE,
SPI_NOR_OPS_ERASE,
SPI_NOR_OPS_LOCK,
SPI_NOR_OPS_UNLOCK,
};
/**
* struct spi_nor - Structure for defining the SPI NOR layer
* @mtd: pointer to an mtd_info structure
* @lock: the lock for the read/write/erase/lock/unlock operations
* @dev: pointer to an SPI device or an SPI NOR controller device
* @page_size: the page size of the SPI NOR
* @addr_width: number of address bytes
* @erase_opcode: the opcode for erasing a sector
* @read_opcode: the read opcode
* @read_dummy: the dummy needed by the read operation
* @program_opcode: the program opcode
* @flash_read: the mode of the read
* @sst_write_second: used by the SST write operation
* @cfg: used by the read_xfer/write_xfer
* @cmd_buf: used by the write_reg
* @prepare: [OPTIONAL] do some preparations for the
* read/write/erase/lock/unlock operations
* @unprepare: [OPTIONAL] do some post work after the
* read/write/erase/lock/unlock operations
* @read_xfer: [OPTIONAL] the read fundamental primitive
* @write_xfer: [OPTIONAL] the write fundamental primitive
* @read_reg: [DRIVER-SPECIFIC] read out the register
* @write_reg: [DRIVER-SPECIFIC] write data to the register
* @read_id: [REPLACEABLE] read out the ID data, and find
* the proper spi_device_id
* @wait_till_ready: [REPLACEABLE] wait till the NOR becomes ready
* @read: [DRIVER-SPECIFIC] read data from the SPI NOR
* @write: [DRIVER-SPECIFIC] write data to the SPI NOR
* @erase: [DRIVER-SPECIFIC] erase a sector of the SPI NOR
* at the offset @offs
* @priv: the private data
*/
struct spi_nor {
struct mtd_info *mtd;
struct mutex lock;
struct device *dev;
u32 page_size;
u8 addr_width;
u8 erase_opcode;
u8 read_opcode;
u8 read_dummy;
u8 program_opcode;
enum read_mode flash_read;
bool sst_write_second;
struct spi_nor_xfer_cfg cfg;
u8 cmd_buf[SPI_NOR_MAX_CMD_SIZE];
int (*prepare)(struct spi_nor *nor, enum spi_nor_ops ops);
void (*unprepare)(struct spi_nor *nor, enum spi_nor_ops ops);
int (*read_xfer)(struct spi_nor *nor, struct spi_nor_xfer_cfg *cfg,
u8 *buf, size_t len);
int (*write_xfer)(struct spi_nor *nor, struct spi_nor_xfer_cfg *cfg,
u8 *buf, size_t len);
int (*read_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len);
int (*write_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len,
int write_enable);
const struct spi_device_id *(*read_id)(struct spi_nor *nor);
int (*wait_till_ready)(struct spi_nor *nor);
int (*read)(struct spi_nor *nor, loff_t from,
size_t len, size_t *retlen, u_char *read_buf);
void (*write)(struct spi_nor *nor, loff_t to,
size_t len, size_t *retlen, const u_char *write_buf);
int (*erase)(struct spi_nor *nor, loff_t offs);
void *priv;
};
/**
* spi_nor_scan() - scan the SPI NOR
* @nor: the spi_nor structure
* @id: the spi_device_id provided by the driver
* @mode: the read mode supported by the driver
*
* The drivers can use this function to scan the SPI NOR.
* During the scan, it tries to gather all the necessary information to
* fill in the mtd_info{} and the spi_nor{}.
*
* The board may assign a spi_device_id via @id, which will be compared with
* the spi_device_id detected by the scanning.
*
* Return: 0 for success, others for failure.
*/
int spi_nor_scan(struct spi_nor *nor, const struct spi_device_id *id,
enum read_mode mode);
extern const struct spi_device_id spi_nor_ids[];
/**
* spi_nor_match_id() - find the spi_device_id by the name
* @name: the name of the spi_device_id
*
* The drivers use this function to find the spi_device_id
* specified by the @name.
*
* Return: the matching spi_device_id pointer on success,
* or NULL on failure.
*/
const struct spi_device_id *spi_nor_match_id(char *name);
#endif
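/*
 * Illustrative usage sketch (not part of this patch): a rough outline of how
 * a hypothetical controller driver might hook into this framework. The
 * "struct my_qspi" type and the my_qspi_* callbacks are placeholders assumed
 * to match the hook prototypes in struct spi_nor above; spi_nor_check()
 * requires dev, read, write, erase, read_reg and write_reg to be filled in
 * before spi_nor_scan() is called.
 */
#if 0	/* example only */
static int my_qspi_setup_flash(struct device *dev, struct my_qspi *q)
{
    const struct spi_device_id *id;
    struct spi_nor *nor = &q->nor;
    int ret;

    nor->dev = dev;
    nor->mtd = &q->mtd;
    nor->priv = q;

    /* mandatory low-level primitives provided by the controller driver */
    nor->read_reg = my_qspi_read_reg;
    nor->write_reg = my_qspi_write_reg;
    nor->read = my_qspi_read;
    nor->write = my_qspi_write;
    nor->erase = my_qspi_erase;

    /* pick a table entry by name, e.g. from DT or platform data */
    id = spi_nor_match_id("w25q256");
    if (!id)
        return -ENODEV;

    /* probe the chip, fill in q->mtd and select opcodes; quad read requested */
    ret = spi_nor_scan(nor, id, SPI_NOR_QUAD);
    if (ret)
        return ret;

    return mtd_device_register(&q->mtd, NULL, 0);
}
#endif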
......@@ -21,6 +21,7 @@
enum bch_ecc {
BCH4_ECC = 0,
BCH8_ECC,
BCH16_ECC,
};
/* ELM support 8 error syndrome process */
......@@ -38,7 +39,7 @@ struct elm_errorvec {
bool error_reported;
bool error_uncorrectable;
int error_count;
int error_loc[ERROR_VECTOR_MAX];
int error_loc[16];
};
void elm_decode_bch_error_page(struct device *dev, u8 *ecc_calc,
......
......@@ -31,6 +31,8 @@ enum omap_ecc {
OMAP_ECC_BCH8_CODE_HW_DETECTION_SW,
/* 8-bit ECC calculation by GPMC, Error detection by ELM */
OMAP_ECC_BCH8_CODE_HW,
/* 16-bit ECC calculation by GPMC, Error detection by ELM */
OMAP_ECC_BCH16_CODE_HW,
};
struct gpmc_nand_regs {
......@@ -50,6 +52,9 @@ struct gpmc_nand_regs {
void __iomem *gpmc_bch_result1[GPMC_BCH_NUM_REMAINDER];
void __iomem *gpmc_bch_result2[GPMC_BCH_NUM_REMAINDER];
void __iomem *gpmc_bch_result3[GPMC_BCH_NUM_REMAINDER];
void __iomem *gpmc_bch_result4[GPMC_BCH_NUM_REMAINDER];
void __iomem *gpmc_bch_result5[GPMC_BCH_NUM_REMAINDER];
void __iomem *gpmc_bch_result6[GPMC_BCH_NUM_REMAINDER];
};
struct omap_nand_platform_data {
......
......@@ -58,6 +58,9 @@ struct pxa3xx_nand_platform_data {
/* use a flash-based bad block table */
bool flash_bbt;
/* requested ECC strength and ECC step size */
int ecc_strength, ecc_step_size;
const struct mtd_partition *parts[NUM_CHIP_SELECT];
unsigned int nr_parts[NUM_CHIP_SELECT];
......
......@@ -109,6 +109,7 @@ struct mtd_write_req {
#define MTD_CAP_RAM (MTD_WRITEABLE | MTD_BIT_WRITEABLE | MTD_NO_ERASE)
#define MTD_CAP_NORFLASH (MTD_WRITEABLE | MTD_BIT_WRITEABLE)
#define MTD_CAP_NANDFLASH (MTD_WRITEABLE)
#define MTD_CAP_NVRAM (MTD_WRITEABLE | MTD_BIT_WRITEABLE | MTD_NO_ERASE)
/* Obsolete ECC byte placement modes (used with obsolete MEMGETOOBSEL) */
#define MTD_NANDECC_OFF 0 // Switch off ECC (Not recommended)
......