Commit affe8a2a authored by Linus Torvalds

Merge tag 'for-linus-20160801' of git://git.infradead.org/linux-mtd

Pull MTD updates from Brian Norris:
 "NAND:

    Quoting Boris:
     'This pull request contains only one notable change:
       - Addition of the MTK NAND controller driver

      And a bunch of specific NAND driver improvements/fixes. Here are the
      changes that are worth mentioning:
       - A few fixes/improvements for the xway NAND controller driver
       - A few fixes for the sunxi NAND controller driver
       - Support for DMA in the sunxi NAND driver
       - Support for the sunxi NAND controller IP embedded in A23/A33 SoCs
       - Addition of bitflip detection in erased pages to the brcmnand driver
       - Support for new brcmnand IPs
       - Update of the OMAP-GPMC binding to support DMA channel description'

    In addition, some small fixes around error handling, etc., as well
    as one long-standing corner case issue (2.6.20, I think?) with
    writing 1 byte less than a page.

  NOR:

   - rework some error handling on reads and writes, so we can better
     handle (for instance) SPI controllers which have limitations on
     their maximum transfer size

   - add new Cadence Quad SPI flash controller driver

   - add new Atmel QSPI flash controller driver

   - add new Hisilicon SPI flash controller driver

   - support a few new flash, and update supported features on others

   - fix the logic used for detecting a fully-unlocked flash

  And other miscellaneous small fixes"

* tag 'for-linus-20160801' of git://git.infradead.org/linux-mtd: (60 commits)
  mtd: spi-nor: don't build Cadence QuadSPI on non-ARM
  mtd: mtk-nor: remove duplicated include from mtk-quadspi.c
  mtd: nand: fix bug writing 1 byte less than page size
  mtd: update description of MTD_BCM47XXSFLASH symbol
  mtd: spi-nor: Add driver for Cadence Quad SPI Flash Controller
  mtd: spi-nor: Bindings for Cadence Quad SPI Flash Controller driver
  mtd: nand: brcmnand: Change BUG_ON in brcmnand_send_cmd
  mtd: pmcmsp-flash: Allocating too much in init_msp_flash()
  mtd: maps: sa1100-flash: potential NULL dereference
  mtd: atmel-quadspi: add driver for Atmel QSPI controller
  mtd: nand: omap2: fix return value check in omap_nand_probe()
  Documentation: atmel-quadspi: add binding file for Atmel QSPI driver
  mtd: spi-nor: add hisilicon spi-nor flash controller driver
  mtd: spi-nor: support dual, quad, and WP for Gigadevice
  mtd: spi-nor: Added support for n25q00a.
  memory: Update dependency of IFC for Layerscape
  mtd: nand: jz4780: Update MODULE_AUTHOR email address
  mtd: nand: sunxi: prevent a small memory leak
  mtd: nand: sunxi: add reset line support
  mtd: nand: sunxi: update DT bindings
  ...
parents 44cee85a 1dcff2e4
......@@ -46,6 +46,10 @@ Required properties:
0 maps to GPMC_WAIT0 pin.
- gpio-cells: Must be set to 2
Required properties when using NAND prefetch dma:
- dmas GPMC NAND prefetch dma channel
- dma-names Must be set to "rxtx"
Timing properties for child nodes. All are optional and default to 0.
- gpmc,sync-clk-ps: Minimum clock period for synchronous mode, in picoseconds
......@@ -137,7 +141,8 @@ Example for an AM33xx board:
ti,hwmods = "gpmc";
reg = <0x50000000 0x2000>;
interrupts = <100>;
dmas = <&edma 52 0>;
dma-names = "rxtx";
gpmc,num-cs = <8>;
gpmc,num-waitpins = <2>;
#address-cells = <2>;
......
* Atmel Quad Serial Peripheral Interface (QSPI)
Required properties:
- compatible: Should be "atmel,sama5d2-qspi".
- reg: Should contain the locations and lengths of the base registers
and the mapped memory.
- reg-names: Should contain the resource reg names:
- qspi_base: configuration register address space
- qspi_mmap: memory mapped address space
- interrupts: Should contain the interrupt for the device.
- clocks: The phandle of the clock needed by the QSPI controller.
- #address-cells: Should be <1>.
- #size-cells: Should be <0>.
Example:
spi@f0020000 {
compatible = "atmel,sama5d2-qspi";
reg = <0xf0020000 0x100>, <0xd0000000 0x8000000>;
reg-names = "qspi_base", "qspi_mmap";
interrupts = <52 IRQ_TYPE_LEVEL_HIGH 7>;
clocks = <&spi0_clk>;
#address-cells = <1>;
#size-cells = <0>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_spi0_default>;
status = "okay";
m25p80@0 {
...
};
};
......@@ -27,6 +27,7 @@ Required properties:
brcm,brcmnand-v6.2
brcm,brcmnand-v7.0
brcm,brcmnand-v7.1
brcm,brcmnand-v7.2
brcm,brcmnand
- reg : the register start and length for NAND register region.
(optional) Flash DMA register range (if present)
......
* Cadence Quad SPI controller
Required properties:
- compatible : Should be "cdns,qspi-nor".
- reg : Contains two entries, each of which is a tuple consisting of a
physical address and length. The first entry is the address and
length of the controller register set. The second entry is the
address and length of the QSPI Controller data area.
- interrupts : Unit interrupt specifier for the controller interrupt.
- clocks : phandle to the Quad SPI clock.
- cdns,fifo-depth : Size of the data FIFO in words.
- cdns,fifo-width : Bus width of the data FIFO in bytes.
- cdns,trigger-address : 32-bit indirect AHB trigger address.
Optional properties:
- cdns,is-decoded-cs : Flag to indicate whether decoder is used or not.
Optional subnodes:
Subnodes of the Cadence Quad SPI controller are spi slave nodes with additional
custom properties:
- cdns,read-delay : Delay for read capture logic, in clock cycles
- cdns,tshsl-ns : Delay in nanoseconds for the length that the master
mode chip select outputs are de-asserted between
transactions.
- cdns,tsd2d-ns : Delay in nanoseconds between one chip select being
de-activated and the activation of another.
- cdns,tchsh-ns : Delay in nanoseconds between last bit of current
transaction and deasserting the device chip select
(qspi_n_ss_out).
- cdns,tslch-ns : Delay in nanoseconds between setting qspi_n_ss_out low
and first bit transfer.
Example:
qspi: spi@ff705000 {
compatible = "cdns,qspi-nor";
#address-cells = <1>;
#size-cells = <0>;
reg = <0xff705000 0x1000>,
<0xffa00000 0x1000>;
interrupts = <0 151 4>;
clocks = <&qspi_clk>;
cdns,is-decoded-cs;
cdns,fifo-depth = <128>;
cdns,fifo-width = <4>;
cdns,trigger-address = <0x00000000>;
flash0: n25q00@0 {
...
cdns,read-delay = <4>;
cdns,tshsl-ns = <50>;
cdns,tsd2d-ns = <50>;
cdns,tchsh-ns = <4>;
cdns,tslch-ns = <4>;
};
};
......@@ -39,7 +39,7 @@ Optional properties:
"prefetch-polled" Prefetch polled mode (default)
"polled" Polled mode, without prefetch
"prefetch-dma" Prefetch enabled sDMA mode
"prefetch-dma" Prefetch enabled DMA mode
"prefetch-irq" Prefetch enabled irq mode
- elm_id: <deprecated> use "ti,elm-id" instead
......
HiSilicon SPI-NOR Flash Controller
Required properties:
- compatible : Should be "hisilicon,fmc-spi-nor" and one of the following strings:
"hisilicon,hi3519-spi-nor"
- #address-cells : Should be 1.
- #size-cells : Should be 0.
- reg : Offset and length of the register set for the controller device.
- reg-names : Must include the following two entries: "control", "memory".
- clocks : phandle to the spi-nor flash controller clock.
Example:
spi-nor-controller@10000000 {
compatible = "hisilicon,hi3519-spi-nor", "hisilicon,fmc-spi-nor";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x10000000 0x1000>, <0x14000000 0x1000000>;
reg-names = "control", "memory";
clocks = <&clock HI3519_FMC_CLK>;
spi-nor@0 {
compatible = "jedec,spi-nor";
reg = <0>;
};
};
MTK SoCs NAND FLASH controller (NFC) DT binding
This file documents the device tree bindings for MTK SoCs NAND controllers.
The functional split of the controller requires two drivers to operate:
the nand controller interface driver and the ECC engine driver.
The hardware description for both devices must be captured as device
tree nodes.
1) NFC NAND Controller Interface (NFI):
=======================================
The first part of NFC is NAND Controller Interface (NFI) HW.
Required NFI properties:
- compatible: Should be "mediatek,mtxxxx-nfc".
- reg: Base physical address and size of NFI.
- interrupts: Interrupts of NFI.
- clocks: NFI required clocks.
- clock-names: internal names of the NFI clocks.
- status: Disabled by default; the platform sets it to "okay".
- ecc-engine: Required ECC Engine node.
- #address-cells: NAND chip index, should be 1.
- #size-cells: Should be 0.
Example:
nandc: nfi@1100d000 {
compatible = "mediatek,mt2701-nfc";
reg = <0 0x1100d000 0 0x1000>;
interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_LOW>;
clocks = <&pericfg CLK_PERI_NFI>,
<&pericfg CLK_PERI_NFI_PAD>;
clock-names = "nfi_clk", "pad_clk";
status = "disabled";
ecc-engine = <&bch>;
#address-cells = <1>;
#size-cells = <0>;
};
Platform related properties, should be set in {platform_name}.dts:
- child nodes: NAND chips.
Child node properties:
- reg: Chip Select Signal, default 0.
Set as reg = <0>, <1> when two chip selects are needed.
Optional:
- nand-on-flash-bbt: Store BBT on NAND Flash.
- nand-ecc-mode: the NAND ecc mode (check driver for supported modes)
- nand-ecc-step-size: Number of data bytes covered by a single ECC step.
valid values: 512 and 1024.
1024 is recommended for large page NANDs.
- nand-ecc-strength: Number of bits to correct per ECC step.
The valid values that the controller supports are: 4, 6,
8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36, 40, 44,
48, 52, 56, 60.
The strength should be calculated as follows:
E = (S - F) * 8 / 14
S = O / (P / Q)
E : nand-ecc-strength.
S : spare size per sector.
F : FDM size, should be in the range [1,8].
It is used to store free oob data.
O : oob size.
P : page size.
Q : nand-ecc-step-size.
If the result does not match any one of the listed
choices above, please select the next smaller valid value
from the list (otherwise the driver will do the adjustment
at runtime); see the worked example after this property list.
- pinctrl-names: Default NAND pin GPIO setting name.
- pinctrl-0: GPIO setting node.
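Worked example (hypothetical values, not taken from any datasheet):
P = 2048, O = 64, Q = 1024, F = 8
S = O / (P / Q) = 64 / 2 = 32
E = (32 - 8) * 8 / 14 = 13.7
13.7 is not a listed strength, so nand-ecc-strength = <12> is used.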
Example:
&pio {
nand_pins_default: nanddefault {
pins_dat {
pinmux = <MT2701_PIN_111_MSDC0_DAT7__FUNC_NLD7>,
<MT2701_PIN_112_MSDC0_DAT6__FUNC_NLD6>,
<MT2701_PIN_114_MSDC0_DAT4__FUNC_NLD4>,
<MT2701_PIN_118_MSDC0_DAT3__FUNC_NLD3>,
<MT2701_PIN_121_MSDC0_DAT0__FUNC_NLD0>,
<MT2701_PIN_120_MSDC0_DAT1__FUNC_NLD1>,
<MT2701_PIN_113_MSDC0_DAT5__FUNC_NLD5>,
<MT2701_PIN_115_MSDC0_RSTB__FUNC_NLD8>,
<MT2701_PIN_119_MSDC0_DAT2__FUNC_NLD2>;
input-enable;
drive-strength = <MTK_DRIVE_8mA>;
bias-pull-up;
};
pins_we {
pinmux = <MT2701_PIN_117_MSDC0_CLK__FUNC_NWEB>;
drive-strength = <MTK_DRIVE_8mA>;
bias-pull-up = <MTK_PUPD_SET_R1R0_10>;
};
pins_ale {
pinmux = <MT2701_PIN_116_MSDC0_CMD__FUNC_NALE>;
drive-strength = <MTK_DRIVE_8mA>;
bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
};
};
};
&nandc {
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&nand_pins_default>;
nand@0 {
reg = <0>;
nand-on-flash-bbt;
nand-ecc-mode = "hw";
nand-ecc-strength = <24>;
nand-ecc-step-size = <1024>;
};
};
NAND chip optional subnodes:
- Partitions, see Documentation/devicetree/bindings/mtd/partition.txt
Example:
nand@0 {
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
preloader@0 {
label = "pl";
read-only;
reg = <0x00000000 0x00400000>;
};
android@0x00400000 {
label = "android";
reg = <0x00400000 0x12c00000>;
};
};
};
2) ECC Engine:
==============
Required BCH properties:
- compatible: Should be "mediatek,mtxxxx-ecc".
- reg: Base physical address and size of ECC.
- interrupts: Interrupts of ECC.
- clocks: ECC required clocks.
- clock-names: internal names of the ECC clocks.
- status: Disabled by default; the platform sets it to "okay".
Example:
bch: ecc@1100e000 {
compatible = "mediatek,mt2701-ecc";
reg = <0 0x1100e000 0 0x1000>;
interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_LOW>;
clocks = <&pericfg CLK_PERI_NFI_ECC>;
clock-names = "nfiecc_clk";
status = "disabled";
};
......@@ -11,10 +11,16 @@ Required properties:
* "ahb" : AHB gating clock
* "mod" : nand controller clock
Optional properties:
- dmas : shall reference the DMA channel associated with the NAND controller.
- dma-names : shall be "rxtx".
Optional child nodes:
Child nodes represent the available nand chips.
Optional properties:
- reset : phandle + reset specifier pair
- reset-names : must contain "ahb"
- allwinner,rb : shall contain the native Ready/Busy ids.
or
- rb-gpios : shall contain the gpios used as R/B pins.
......
......@@ -397,7 +397,7 @@ static int __init init_axis_flash(void)
if (!romfs_in_flash) {
/* Create a RAM device for the root partition (romfs). */
#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0) || (CONFIG_MTDRAM_ABS_POS != 0)
#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0)
/* No use trying to boot this kernel from RAM. Panic! */
printk(KERN_EMERG "axisflashmap: Cannot create an MTD RAM "
"device due to kernel (mis)configuration!\n");
......
......@@ -320,7 +320,7 @@ static int __init init_axis_flash(void)
* but its size must be configured as 0 so as not to conflict
* with our usage.
*/
#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0) || (CONFIG_MTDRAM_ABS_POS != 0)
#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0)
if (!romfs_in_flash && !nand_boot) {
printk(KERN_EMERG "axisflashmap: Cannot create an MTD RAM "
"device; configure CONFIG_MTD_MTDRAM with size = 0!\n");
......
......@@ -115,7 +115,7 @@ config FSL_CORENET_CF
config FSL_IFC
bool
depends on FSL_SOC
depends on FSL_SOC || ARCH_LAYERSCAPE
config JZ4780_NEMC
bool "Ingenic JZ4780 SoC NEMC driver"
......
......@@ -31,7 +31,9 @@
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/fsl_ifc.h>
#include <asm/prom.h>
#include <linux/irqdomain.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
struct fsl_ifc_ctrl *fsl_ifc_ctrl_dev;
EXPORT_SYMBOL(fsl_ifc_ctrl_dev);
......
......@@ -416,7 +416,7 @@ static int cfi_staa_read (struct mtd_info *mtd, loff_t from, size_t len, size_t
return ret;
}
static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
static int do_write_buffer(struct map_info *map, struct flchip *chip,
unsigned long adr, const u_char *buf, int len)
{
struct cfi_private *cfi = map->fldrv_priv;
......
......@@ -113,12 +113,12 @@ config MTD_SST25L
if you want to specify device partitioning.
config MTD_BCM47XXSFLASH
tristate "R/O support for serial flash on BCMA bus"
tristate "Support for serial flash on BCMA bus"
depends on BCMA_SFLASH && (MIPS || ARM)
help
BCMA bus can have various flash memories attached, they are
registered by bcma as platform devices. This enables driver for
serial flash memories (only read-only mode is implemented).
serial flash memories.
config MTD_SLRAM
tristate "Uncached system RAM"
......@@ -171,18 +171,6 @@ config MTDRAM_ERASE_SIZE
as a module, it is also possible to specify this as a parameter when
loading the module.
#If not a module (I don't want to test it as a module)
config MTDRAM_ABS_POS
hex "SRAM Hexadecimal Absolute position or 0"
depends on MTD_MTDRAM=y
default "0"
help
If you have system RAM accessible by the CPU but not used by Linux
in normal operation, you can give the physical address at which the
available RAM starts, and the MTDRAM driver will use it instead of
allocating space from Linux's available memory. Otherwise, leave
this set to zero. Most people will want to leave this as zero.
config MTD_BLOCK2MTD
tristate "MTD using block device"
depends on BLOCK
......
......@@ -73,14 +73,15 @@ static int m25p80_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
return spi_write(spi, flash->command, len + 1);
}
static void m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
const u_char *buf)
{
struct m25p *flash = nor->priv;
struct spi_device *spi = flash->spi;
struct spi_transfer t[2] = {};
struct spi_message m;
int cmd_sz = m25p_cmdsz(nor);
ssize_t ret;
spi_message_init(&m);
......@@ -98,9 +99,14 @@ static void m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
t[1].len = len;
spi_message_add_tail(&t[1], &m);
spi_sync(spi, &m);
ret = spi_sync(spi, &m);
if (ret)
return ret;
*retlen += m.actual_length - cmd_sz;
ret = m.actual_length - cmd_sz;
if (ret < 0)
return -EIO;
return ret;
}
static inline unsigned int m25p80_rx_nbits(struct spi_nor *nor)
......@@ -119,21 +125,21 @@ static inline unsigned int m25p80_rx_nbits(struct spi_nor *nor)
* Read an address range from the nor chip. The address range
* may be any size provided it is within the physical boundaries.
*/
static int m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
size_t *retlen, u_char *buf)
static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
u_char *buf)
{
struct m25p *flash = nor->priv;
struct spi_device *spi = flash->spi;
struct spi_transfer t[2];
struct spi_message m;
unsigned int dummy = nor->read_dummy;
ssize_t ret;
/* convert the dummy cycles to the number of bytes */
dummy /= 8;
if (spi_flash_read_supported(spi)) {
struct spi_flash_read_message msg;
int ret;
memset(&msg, 0, sizeof(msg));
......@@ -149,8 +155,9 @@ static int m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
msg.data_nbits = m25p80_rx_nbits(nor);
ret = spi_flash_read(spi, &msg);
*retlen = msg.retlen;
return ret;
if (ret < 0)
return ret;
return msg.retlen;
}
spi_message_init(&m);
......@@ -165,13 +172,17 @@ static int m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
t[1].rx_buf = buf;
t[1].rx_nbits = m25p80_rx_nbits(nor);
t[1].len = len;
t[1].len = min(len, spi_max_transfer_size(spi));
spi_message_add_tail(&t[1], &m);
spi_sync(spi, &m);
ret = spi_sync(spi, &m);
if (ret)
return ret;
*retlen = m.actual_length - m25p_cmdsz(nor) - dummy;
return 0;
ret = m.actual_length - m25p_cmdsz(nor) - dummy;
if (ret < 0)
return -EIO;
return ret;
}
/*
......
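The m25p80 hunks above change the read/write hooks to return a ssize_t byte
count (or a negative error) instead of void/int, and cap each SPI transfer
at spi_max_transfer_size(). A minimal sketch of how a caller can consume
that contract (illustrative only, not the actual spi-nor core; read_all()
is a hypothetical helper):

	static ssize_t read_all(struct spi_nor *nor, loff_t from,
				size_t len, u_char *buf)
	{
		size_t done = 0;

		while (done < len) {
			/* may return fewer bytes than requested */
			ssize_t ret = nor->read(nor, from + done,
						len - done, buf + done);

			if (ret < 0)
				return ret;	/* hard error */
			if (ret == 0)
				return -EIO;	/* no forward progress */
			done += ret;
		}
		return done;
	}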
......@@ -186,7 +186,7 @@ static int of_flash_probe(struct platform_device *dev)
* consists internally of 2 non-identical NOR chips on one die.
*/
p = of_get_property(dp, "reg", &count);
if (count % reg_tuple_size != 0) {
if (!p || count % reg_tuple_size != 0) {
dev_err(&dev->dev, "Malformed reg property on %s\n",
dev->dev.of_node->full_name);
err = -EINVAL;
......
......@@ -75,15 +75,15 @@ static int __init init_msp_flash(void)
printk(KERN_NOTICE "Found %d PMC flash devices\n", fcnt);
msp_flash = kmalloc(fcnt * sizeof(struct map_info *), GFP_KERNEL);
msp_flash = kcalloc(fcnt, sizeof(*msp_flash), GFP_KERNEL);
if (!msp_flash)
return -ENOMEM;
msp_parts = kmalloc(fcnt * sizeof(struct mtd_partition *), GFP_KERNEL);
msp_parts = kcalloc(fcnt, sizeof(*msp_parts), GFP_KERNEL);
if (!msp_parts)
goto free_msp_flash;
msp_maps = kcalloc(fcnt, sizeof(struct mtd_info), GFP_KERNEL);
msp_maps = kcalloc(fcnt, sizeof(*msp_maps), GFP_KERNEL);
if (!msp_maps)
goto free_msp_parts;
......
......@@ -230,8 +230,10 @@ static struct sa_info *sa1100_setup_mtd(struct platform_device *pdev,
info->mtd = mtd_concat_create(cdev, info->num_subdev,
plat->name);
if (info->mtd == NULL)
if (info->mtd == NULL) {
ret = -ENXIO;
goto err;
}
}
info->mtd->dev.parent = &pdev->dev;
......
......@@ -438,7 +438,7 @@ config MTD_NAND_FSL_ELBC
config MTD_NAND_FSL_IFC
tristate "NAND support for Freescale IFC controller"
depends on MTD_NAND && FSL_SOC
depends on MTD_NAND && (FSL_SOC || ARCH_LAYERSCAPE)
select FSL_IFC
select MEMORY
help
......@@ -539,7 +539,6 @@ config MTD_NAND_FSMC
config MTD_NAND_XWAY
tristate "Support for NAND on Lantiq XWAY SoC"
depends on LANTIQ && SOC_TYPE_XWAY
select MTD_NAND_PLATFORM
help
Enables support for NAND Flash chips on Lantiq XWAY SoCs. NAND is attached
to the External Bus Unit (EBU).
......@@ -563,4 +562,11 @@ config MTD_NAND_QCOM
Enables support for NAND flash chips on SoCs containing the EBI2 NAND
controller. This controller is found on IPQ806x SoC.
config MTD_NAND_MTK
tristate "Support for NAND controller on MTK SoCs"
depends on HAS_DMA
help
Enables support for NAND controller on MTK SoCs.
This controller is found on mt27xx, mt81xx, mt65xx SoCs.
endif # MTD_NAND
......@@ -57,5 +57,6 @@ obj-$(CONFIG_MTD_NAND_SUNXI) += sunxi_nand.o
obj-$(CONFIG_MTD_NAND_HISI504) += hisi504_nand.o
obj-$(CONFIG_MTD_NAND_BRCMNAND) += brcmnand/
obj-$(CONFIG_MTD_NAND_QCOM) += qcom_nandc.o
obj-$(CONFIG_MTD_NAND_MTK) += mtk_nand.o mtk_ecc.o
nand-objs := nand_base.o nand_bbt.o nand_timings.o
......@@ -340,6 +340,36 @@ static const u16 brcmnand_regs_v71[] = {
[BRCMNAND_FC_BASE] = 0x400,
};
/* BRCMNAND v7.2 */
static const u16 brcmnand_regs_v72[] = {
[BRCMNAND_CMD_START] = 0x04,
[BRCMNAND_CMD_EXT_ADDRESS] = 0x08,
[BRCMNAND_CMD_ADDRESS] = 0x0c,
[BRCMNAND_INTFC_STATUS] = 0x14,
[BRCMNAND_CS_SELECT] = 0x18,
[BRCMNAND_CS_XOR] = 0x1c,
[BRCMNAND_LL_OP] = 0x20,
[BRCMNAND_CS0_BASE] = 0x50,
[BRCMNAND_CS1_BASE] = 0,
[BRCMNAND_CORR_THRESHOLD] = 0xdc,
[BRCMNAND_CORR_THRESHOLD_EXT] = 0xe0,
[BRCMNAND_UNCORR_COUNT] = 0xfc,
[BRCMNAND_CORR_COUNT] = 0x100,
[BRCMNAND_CORR_EXT_ADDR] = 0x10c,
[BRCMNAND_CORR_ADDR] = 0x110,
[BRCMNAND_UNCORR_EXT_ADDR] = 0x114,
[BRCMNAND_UNCORR_ADDR] = 0x118,
[BRCMNAND_SEMAPHORE] = 0x150,
[BRCMNAND_ID] = 0x194,
[BRCMNAND_ID_EXT] = 0x198,
[BRCMNAND_LL_RDATA] = 0x19c,
[BRCMNAND_OOB_READ_BASE] = 0x200,
[BRCMNAND_OOB_READ_10_BASE] = 0,
[BRCMNAND_OOB_WRITE_BASE] = 0x400,
[BRCMNAND_OOB_WRITE_10_BASE] = 0,
[BRCMNAND_FC_BASE] = 0x600,
};
enum brcmnand_cs_reg {
BRCMNAND_CS_CFG_EXT = 0,
BRCMNAND_CS_CFG,
......@@ -435,7 +465,9 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
}
/* Register offsets */
if (ctrl->nand_version >= 0x0701)
if (ctrl->nand_version >= 0x0702)
ctrl->reg_offsets = brcmnand_regs_v72;
else if (ctrl->nand_version >= 0x0701)
ctrl->reg_offsets = brcmnand_regs_v71;
else if (ctrl->nand_version >= 0x0600)
ctrl->reg_offsets = brcmnand_regs_v60;
......@@ -480,7 +512,9 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
}
/* Maximum spare area sector size (per 512B) */
if (ctrl->nand_version >= 0x0600)
if (ctrl->nand_version >= 0x0702)
ctrl->max_oob = 128;
else if (ctrl->nand_version >= 0x0600)
ctrl->max_oob = 64;
else if (ctrl->nand_version >= 0x0500)
ctrl->max_oob = 32;
......@@ -583,14 +617,20 @@ static void brcmnand_wr_corr_thresh(struct brcmnand_host *host, u8 val)
enum brcmnand_reg reg = BRCMNAND_CORR_THRESHOLD;
int cs = host->cs;
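/*
* Correction thresholds for all chip selects are packed into one
* register (spilling into the _EXT register); the per-CS field width
* depends on the controller version.
*/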
if (ctrl->nand_version >= 0x0600)
if (ctrl->nand_version >= 0x0702)
bits = 7;
else if (ctrl->nand_version >= 0x0600)
bits = 6;
else if (ctrl->nand_version >= 0x0500)
bits = 5;
else
bits = 4;
if (ctrl->nand_version >= 0x0600) {
if (ctrl->nand_version >= 0x0702) {
if (cs >= 4)
reg = BRCMNAND_CORR_THRESHOLD_EXT;
shift = (cs % 4) * bits;
} else if (ctrl->nand_version >= 0x0600) {
if (cs >= 5)
reg = BRCMNAND_CORR_THRESHOLD_EXT;
shift = (cs % 5) * bits;
......@@ -631,19 +671,28 @@ enum {
static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl)
{
if (ctrl->nand_version >= 0x0600)
if (ctrl->nand_version >= 0x0702)
return GENMASK(7, 0);
else if (ctrl->nand_version >= 0x0600)
return GENMASK(6, 0);
else
return GENMASK(5, 0);
}
#define NAND_ACC_CONTROL_ECC_SHIFT 16
#define NAND_ACC_CONTROL_ECC_EXT_SHIFT 13
static inline u32 brcmnand_ecc_level_mask(struct brcmnand_controller *ctrl)
{
u32 mask = (ctrl->nand_version >= 0x0600) ? 0x1f : 0x0f;
return mask << NAND_ACC_CONTROL_ECC_SHIFT;
mask <<= NAND_ACC_CONTROL_ECC_SHIFT;
/* v7.2 includes additional ECC levels */
if (ctrl->nand_version >= 0x0702)
mask |= 0x7 << NAND_ACC_CONTROL_ECC_EXT_SHIFT;
return mask;
}
static void brcmnand_set_ecc_enabled(struct brcmnand_host *host, int en)
......@@ -667,7 +716,9 @@ static void brcmnand_set_ecc_enabled(struct brcmnand_host *host, int en)
static inline int brcmnand_sector_1k_shift(struct brcmnand_controller *ctrl)
{
if (ctrl->nand_version >= 0x0600)
if (ctrl->nand_version >= 0x0702)
return 9;
else if (ctrl->nand_version >= 0x0600)
return 7;
else if (ctrl->nand_version >= 0x0500)
return 6;
......@@ -773,10 +824,16 @@ enum brcmnand_llop_type {
* Internal support functions
***********************************************************************/
static inline bool is_hamming_ecc(struct brcmnand_cfg *cfg)
static inline bool is_hamming_ecc(struct brcmnand_controller *ctrl,
struct brcmnand_cfg *cfg)
{
return cfg->sector_size_1k == 0 && cfg->spare_area_size == 16 &&
cfg->ecc_level == 15;
if (ctrl->nand_version <= 0x0701)
return cfg->sector_size_1k == 0 && cfg->spare_area_size == 16 &&
cfg->ecc_level == 15;
else
return cfg->sector_size_1k == 0 && ((cfg->spare_area_size == 16 &&
cfg->ecc_level == 15) ||
(cfg->spare_area_size == 28 && cfg->ecc_level == 16));
}
/*
......@@ -931,7 +988,7 @@ static int brcmstb_choose_ecc_layout(struct brcmnand_host *host)
if (p->sector_size_1k)
ecc_level <<= 1;
if (is_hamming_ecc(p)) {
if (is_hamming_ecc(host->ctrl, p)) {
ecc->bytes = 3 * sectors;
mtd_set_ooblayout(mtd, &brcmnand_hamming_ooblayout_ops);
return 0;
......@@ -1108,7 +1165,7 @@ static void brcmnand_send_cmd(struct brcmnand_host *host, int cmd)
ctrl->cmd_pending = cmd;
intfc = brcmnand_read_reg(ctrl, BRCMNAND_INTFC_STATUS);
BUG_ON(!(intfc & INTFC_CTLR_READY));
WARN_ON(!(intfc & INTFC_CTLR_READY));
mb(); /* flush previous writes */
brcmnand_write_reg(ctrl, BRCMNAND_CMD_START,
......@@ -1545,6 +1602,56 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
return ret;
}
/*
* Check a page to see if it is erased (w/ bitflips) after an uncorrectable ECC
* error
*
* Because the HW ECC signals an ECC error if an erased page has even a single
* bitflip, we must check each ECC error to see if it is actually an erased
* page with bitflips, not a truly corrupted page.
*
* On a real error, return a negative error code (-EBADMSG for ECC error), and
* buf will contain raw data.
* Otherwise, buf is filled with 0xffs and the maximum number of
* bitflips per ECC sector is returned to the caller.
*/
static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
struct nand_chip *chip, void *buf, u64 addr)
{
int i, sas;
void *oob = chip->oob_poi;
int bitflips = 0;
int page = addr >> chip->page_shift;
int ret;
if (!buf) {
buf = chip->buffers->databuf;
/* Invalidate page cache */
chip->pagebuf = -1;
}
sas = mtd->oobsize / chip->ecc.steps;
/* read without ecc for verification */
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
ret = chip->ecc.read_page_raw(mtd, chip, buf, true, page);
if (ret)
return ret;
for (i = 0; i < chip->ecc.steps; i++, oob += sas) {
ret = nand_check_erased_ecc_chunk(buf, chip->ecc.size,
oob, sas, NULL, 0,
chip->ecc.strength);
if (ret < 0)
return ret;
bitflips = max(bitflips, ret);
}
return bitflips;
}
static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
u64 addr, unsigned int trans, u32 *buf, u8 *oob)
{
......@@ -1552,9 +1659,11 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
struct brcmnand_controller *ctrl = host->ctrl;
u64 err_addr = 0;
int err;
bool retry = true;
dev_dbg(ctrl->dev, "read %llx -> %p\n", (unsigned long long)addr, buf);
try_dmaread:
brcmnand_write_reg(ctrl, BRCMNAND_UNCORR_COUNT, 0);
if (has_flash_dma(ctrl) && !oob && flash_dma_buf_ok(buf)) {
......@@ -1575,6 +1684,34 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
}
if (mtd_is_eccerr(err)) {
/*
* On controller versions 7.0 and 7.1, a DMA read that follows a PIO
* read which reported an uncorrectable error can see that stale
* error again: the DMA engine latches the error and clears it only
* on a subsequent DMA read. Retry once to clear a possible false
* error reported for the current DMA read.
*/
if ((ctrl->nand_version == 0x0700) ||
(ctrl->nand_version == 0x0701)) {
if (retry) {
retry = false;
goto try_dmaread;
}
}
/*
* Controller version 7.2 detects erased-page bitflips in hardware;
* apply the sw verification for older controllers only.
*/
if (ctrl->nand_version < 0x0702) {
err = brcmstb_nand_verify_erased_page(mtd, chip, buf,
addr);
/* erased page bitflips corrected */
if (err > 0)
return err;
}
dev_dbg(ctrl->dev, "uncorrectable error at 0x%llx\n",
(unsigned long long)err_addr);
mtd->ecc_stats.failed++;
......@@ -1857,7 +1994,8 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
return 0;
}
static void brcmnand_print_cfg(char *buf, struct brcmnand_cfg *cfg)
static void brcmnand_print_cfg(struct brcmnand_host *host,
char *buf, struct brcmnand_cfg *cfg)
{
buf += sprintf(buf,
"%lluMiB total, %uKiB blocks, %u%s pages, %uB OOB, %u-bit",
......@@ -1868,7 +2006,7 @@ static void brcmnand_print_cfg(char *buf, struct brcmnand_cfg *cfg)
cfg->spare_area_size, cfg->device_width);
/* Account for Hamming ECC and for BCH 512B vs 1KiB sectors */
if (is_hamming_ecc(cfg))
if (is_hamming_ecc(host->ctrl, cfg))
sprintf(buf, ", Hamming ECC");
else if (cfg->sector_size_1k)
sprintf(buf, ", BCH-%u (1KiB sector)", cfg->ecc_level << 1);
......@@ -1987,7 +2125,7 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
brcmnand_set_ecc_enabled(host, 1);
brcmnand_print_cfg(msg, cfg);
brcmnand_print_cfg(host, msg, cfg);
dev_info(ctrl->dev, "detected %s\n", msg);
/* Configure ACC_CONTROL */
......@@ -1995,6 +2133,10 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
tmp = nand_readreg(ctrl, offs);
tmp &= ~ACC_CONTROL_PARTIAL_PAGE;
tmp &= ~ACC_CONTROL_RD_ERASED;
/* We need to turn on reads from erased pages protected by ECC */
if (ctrl->nand_version >= 0x0702)
tmp |= ACC_CONTROL_RD_ERASED;
tmp &= ~ACC_CONTROL_FAST_PGM_RDIN;
if (ctrl->features & BRCMNAND_HAS_PREFETCH) {
/*
......@@ -2195,6 +2337,7 @@ static const struct of_device_id brcmnand_of_match[] = {
{ .compatible = "brcm,brcmnand-v6.2" },
{ .compatible = "brcm,brcmnand-v7.0" },
{ .compatible = "brcm,brcmnand-v7.1" },
{ .compatible = "brcm,brcmnand-v7.2" },
{},
};
MODULE_DEVICE_TABLE(of, brcmnand_of_match);
......
......@@ -375,6 +375,6 @@ static struct platform_driver jz4780_bch_driver = {
module_platform_driver(jz4780_bch_driver);
MODULE_AUTHOR("Alex Smith <alex@alex-smith.me.uk>");
MODULE_AUTHOR("Harvey Hunt <harvey.hunt@imgtec.com>");
MODULE_AUTHOR("Harvey Hunt <harveyhuntnexus@gmail.com>");
MODULE_DESCRIPTION("Ingenic JZ4780 BCH error correction driver");
MODULE_LICENSE("GPL v2");
......@@ -412,6 +412,6 @@ static struct platform_driver jz4780_nand_driver = {
module_platform_driver(jz4780_nand_driver);
MODULE_AUTHOR("Alex Smith <alex@alex-smith.me.uk>");
MODULE_AUTHOR("Harvey Hunt <harvey.hunt@imgtec.com>");
MODULE_AUTHOR("Harvey Hunt <harveyhuntnexus@gmail.com>");
MODULE_DESCRIPTION("Ingenic JZ4780 NAND driver");
MODULE_LICENSE("GPL v2");
/*
* MTK ECC controller driver.
* Copyright (C) 2016 MediaTek Inc.
* Authors: Xiaolei Li <xiaolei.li@mediatek.com>
* Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/clk.h>
#include <linux/module.h>
#include <linux/iopoll.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/mutex.h>
#include "mtk_ecc.h"
#define ECC_IDLE_MASK BIT(0)
#define ECC_IRQ_EN BIT(0)
#define ECC_OP_ENABLE (1)
#define ECC_OP_DISABLE (0)
#define ECC_ENCCON (0x00)
#define ECC_ENCCNFG (0x04)
#define ECC_CNFG_4BIT (0)
#define ECC_CNFG_6BIT (1)
#define ECC_CNFG_8BIT (2)
#define ECC_CNFG_10BIT (3)
#define ECC_CNFG_12BIT (4)
#define ECC_CNFG_14BIT (5)
#define ECC_CNFG_16BIT (6)
#define ECC_CNFG_18BIT (7)
#define ECC_CNFG_20BIT (8)
#define ECC_CNFG_22BIT (9)
#define ECC_CNFG_24BIT (0xa)
#define ECC_CNFG_28BIT (0xb)
#define ECC_CNFG_32BIT (0xc)
#define ECC_CNFG_36BIT (0xd)
#define ECC_CNFG_40BIT (0xe)
#define ECC_CNFG_44BIT (0xf)
#define ECC_CNFG_48BIT (0x10)
#define ECC_CNFG_52BIT (0x11)
#define ECC_CNFG_56BIT (0x12)
#define ECC_CNFG_60BIT (0x13)
#define ECC_MODE_SHIFT (5)
#define ECC_MS_SHIFT (16)
#define ECC_ENCDIADDR (0x08)
#define ECC_ENCIDLE (0x0C)
#define ECC_ENCPAR(x) (0x10 + (x) * sizeof(u32))
#define ECC_ENCIRQ_EN (0x80)
#define ECC_ENCIRQ_STA (0x84)
#define ECC_DECCON (0x100)
#define ECC_DECCNFG (0x104)
#define DEC_EMPTY_EN BIT(31)
#define DEC_CNFG_CORRECT (0x3 << 12)
#define ECC_DECIDLE (0x10C)
#define ECC_DECENUM0 (0x114)
#define ERR_MASK (0x3f)
#define ECC_DECDONE (0x124)
#define ECC_DECIRQ_EN (0x200)
#define ECC_DECIRQ_STA (0x204)
#define ECC_TIMEOUT (500000)
#define ECC_IDLE_REG(op) ((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE)
#define ECC_CTL_REG(op) ((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON)
#define ECC_IRQ_REG(op) ((op) == ECC_ENCODE ? \
ECC_ENCIRQ_EN : ECC_DECIRQ_EN)
struct mtk_ecc {
struct device *dev;
void __iomem *regs;
struct clk *clk;
struct completion done;
struct mutex lock;
u32 sectors;
};
static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc,
enum mtk_ecc_operation op)
{
struct device *dev = ecc->dev;
u32 val;
int ret;
ret = readl_poll_timeout_atomic(ecc->regs + ECC_IDLE_REG(op), val,
val & ECC_IDLE_MASK,
10, ECC_TIMEOUT);
if (ret)
dev_warn(dev, "%s NOT idle\n",
op == ECC_ENCODE ? "encoder" : "decoder");
}
static irqreturn_t mtk_ecc_irq(int irq, void *id)
{
struct mtk_ecc *ecc = id;
enum mtk_ecc_operation op;
u32 dec, enc;
dec = readw(ecc->regs + ECC_DECIRQ_STA) & ECC_IRQ_EN;
if (dec) {
op = ECC_DECODE;
dec = readw(ecc->regs + ECC_DECDONE);
if (dec & ecc->sectors) {
ecc->sectors = 0;
complete(&ecc->done);
} else {
return IRQ_HANDLED;
}
} else {
enc = readl(ecc->regs + ECC_ENCIRQ_STA) & ECC_IRQ_EN;
if (enc) {
op = ECC_ENCODE;
complete(&ecc->done);
} else {
return IRQ_NONE;
}
}
writel(0, ecc->regs + ECC_IRQ_REG(op));
return IRQ_HANDLED;
}
static void mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
{
u32 ecc_bit = ECC_CNFG_4BIT, dec_sz, enc_sz;
u32 reg;
switch (config->strength) {
case 4:
ecc_bit = ECC_CNFG_4BIT;
break;
case 6:
ecc_bit = ECC_CNFG_6BIT;
break;
case 8:
ecc_bit = ECC_CNFG_8BIT;
break;
case 10:
ecc_bit = ECC_CNFG_10BIT;
break;
case 12:
ecc_bit = ECC_CNFG_12BIT;
break;
case 14:
ecc_bit = ECC_CNFG_14BIT;
break;
case 16:
ecc_bit = ECC_CNFG_16BIT;
break;
case 18:
ecc_bit = ECC_CNFG_18BIT;
break;
case 20:
ecc_bit = ECC_CNFG_20BIT;
break;
case 22:
ecc_bit = ECC_CNFG_22BIT;
break;
case 24:
ecc_bit = ECC_CNFG_24BIT;
break;
case 28:
ecc_bit = ECC_CNFG_28BIT;
break;
case 32:
ecc_bit = ECC_CNFG_32BIT;
break;
case 36:
ecc_bit = ECC_CNFG_36BIT;
break;
case 40:
ecc_bit = ECC_CNFG_40BIT;
break;
case 44:
ecc_bit = ECC_CNFG_44BIT;
break;
case 48:
ecc_bit = ECC_CNFG_48BIT;
break;
case 52:
ecc_bit = ECC_CNFG_52BIT;
break;
case 56:
ecc_bit = ECC_CNFG_56BIT;
break;
case 60:
ecc_bit = ECC_CNFG_60BIT;
break;
default:
dev_err(ecc->dev, "invalid strength %d, default to 4 bits\n",
config->strength);
}
if (config->op == ECC_ENCODE) {
/* configure ECC encoder (in bits) */
enc_sz = config->len << 3;
reg = ecc_bit | (config->mode << ECC_MODE_SHIFT);
reg |= (enc_sz << ECC_MS_SHIFT);
writel(reg, ecc->regs + ECC_ENCCNFG);
if (config->mode != ECC_NFI_MODE)
writel(lower_32_bits(config->addr),
ecc->regs + ECC_ENCDIADDR);
} else {
/* configure ECC decoder (in bits) */
dec_sz = (config->len << 3) +
config->strength * ECC_PARITY_BITS;
reg = ecc_bit | (config->mode << ECC_MODE_SHIFT);
reg |= (dec_sz << ECC_MS_SHIFT) | DEC_CNFG_CORRECT;
reg |= DEC_EMPTY_EN;
writel(reg, ecc->regs + ECC_DECCNFG);
if (config->sectors)
ecc->sectors = 1 << (config->sectors - 1);
}
}
void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
int sectors)
{
u32 offset, i, err;
u32 bitflips = 0;
stats->corrected = 0;
stats->failed = 0;
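/*
* Each ECC_DECENUM register packs four per-sector error counts, one
* byte each; an all-ones count (ERR_MASK) marks the sector
* uncorrectable.
*/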
for (i = 0; i < sectors; i++) {
offset = (i >> 2) << 2;
err = readl(ecc->regs + ECC_DECENUM0 + offset);
err = err >> ((i % 4) * 8);
err &= ERR_MASK;
if (err == ERR_MASK) {
/* uncorrectable errors */
stats->failed++;
continue;
}
stats->corrected += err;
bitflips = max_t(u32, bitflips, err);
}
stats->bitflips = bitflips;
}
EXPORT_SYMBOL(mtk_ecc_get_stats);
void mtk_ecc_release(struct mtk_ecc *ecc)
{
clk_disable_unprepare(ecc->clk);
put_device(ecc->dev);
}
EXPORT_SYMBOL(mtk_ecc_release);
static void mtk_ecc_hw_init(struct mtk_ecc *ecc)
{
mtk_ecc_wait_idle(ecc, ECC_ENCODE);
writew(ECC_OP_DISABLE, ecc->regs + ECC_ENCCON);
mtk_ecc_wait_idle(ecc, ECC_DECODE);
writel(ECC_OP_DISABLE, ecc->regs + ECC_DECCON);
}
static struct mtk_ecc *mtk_ecc_get(struct device_node *np)
{
struct platform_device *pdev;
struct mtk_ecc *ecc;
pdev = of_find_device_by_node(np);
if (!pdev || !platform_get_drvdata(pdev))
return ERR_PTR(-EPROBE_DEFER);
get_device(&pdev->dev);
ecc = platform_get_drvdata(pdev);
clk_prepare_enable(ecc->clk);
mtk_ecc_hw_init(ecc);
return ecc;
}
struct mtk_ecc *of_mtk_ecc_get(struct device_node *of_node)
{
struct mtk_ecc *ecc = NULL;
struct device_node *np;
np = of_parse_phandle(of_node, "ecc-engine", 0);
if (np) {
ecc = mtk_ecc_get(np);
of_node_put(np);
}
return ecc;
}
EXPORT_SYMBOL(of_mtk_ecc_get);
int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
{
enum mtk_ecc_operation op = config->op;
int ret;
ret = mutex_lock_interruptible(&ecc->lock);
if (ret) {
dev_err(ecc->dev, "interrupted when attempting to lock\n");
return ret;
}
mtk_ecc_wait_idle(ecc, op);
mtk_ecc_config(ecc, config);
writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));
init_completion(&ecc->done);
writew(ECC_IRQ_EN, ecc->regs + ECC_IRQ_REG(op));
return 0;
}
EXPORT_SYMBOL(mtk_ecc_enable);
void mtk_ecc_disable(struct mtk_ecc *ecc)
{
enum mtk_ecc_operation op = ECC_ENCODE;
/* find out the running operation */
if (readw(ecc->regs + ECC_CTL_REG(op)) != ECC_OP_ENABLE)
op = ECC_DECODE;
/* disable it */
mtk_ecc_wait_idle(ecc, op);
writew(0, ecc->regs + ECC_IRQ_REG(op));
writew(ECC_OP_DISABLE, ecc->regs + ECC_CTL_REG(op));
mutex_unlock(&ecc->lock);
}
EXPORT_SYMBOL(mtk_ecc_disable);
int mtk_ecc_wait_done(struct mtk_ecc *ecc, enum mtk_ecc_operation op)
{
int ret;
ret = wait_for_completion_timeout(&ecc->done, msecs_to_jiffies(500));
if (!ret) {
dev_err(ecc->dev, "%s timeout - interrupt did not arrive\n",
(op == ECC_ENCODE) ? "encoder" : "decoder");
return -ETIMEDOUT;
}
return 0;
}
EXPORT_SYMBOL(mtk_ecc_wait_done);
int mtk_ecc_encode(struct mtk_ecc *ecc, struct mtk_ecc_config *config,
u8 *data, u32 bytes)
{
dma_addr_t addr;
u32 *p, len, i;
int ret = 0;
addr = dma_map_single(ecc->dev, data, bytes, DMA_TO_DEVICE);
ret = dma_mapping_error(ecc->dev, addr);
if (ret) {
dev_err(ecc->dev, "dma mapping error\n");
return -EINVAL;
}
config->op = ECC_ENCODE;
config->addr = addr;
ret = mtk_ecc_enable(ecc, config);
if (ret) {
dma_unmap_single(ecc->dev, addr, bytes, DMA_TO_DEVICE);
return ret;
}
ret = mtk_ecc_wait_done(ecc, ECC_ENCODE);
if (ret)
goto timeout;
mtk_ecc_wait_idle(ecc, ECC_ENCODE);
/* Program ECC bytes to OOB: per sector oob = FDM + ECC + SPARE */
len = (config->strength * ECC_PARITY_BITS + 7) >> 3;
p = (u32 *)(data + bytes);
/* write the parity bytes generated by the ECC back to the OOB region */
for (i = 0; i < len; i++)
p[i] = readl(ecc->regs + ECC_ENCPAR(i));
timeout:
dma_unmap_single(ecc->dev, addr, bytes, DMA_TO_DEVICE);
mtk_ecc_disable(ecc);
return ret;
}
EXPORT_SYMBOL(mtk_ecc_encode);
void mtk_ecc_adjust_strength(u32 *p)
{
u32 ecc[] = {4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
40, 44, 48, 52, 56, 60};
int i;
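/*
* Round to a supported strength: keep exact matches, otherwise
* round down to the next supported value (requests below the
* minimum round up to it).
*/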
for (i = 0; i < ARRAY_SIZE(ecc); i++) {
if (*p <= ecc[i]) {
if (!i)
*p = ecc[i];
else if (*p != ecc[i])
*p = ecc[i - 1];
return;
}
}
*p = ecc[ARRAY_SIZE(ecc) - 1];
}
EXPORT_SYMBOL(mtk_ecc_adjust_strength);
static int mtk_ecc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mtk_ecc *ecc;
struct resource *res;
int irq, ret;
ecc = devm_kzalloc(dev, sizeof(*ecc), GFP_KERNEL);
if (!ecc)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ecc->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(ecc->regs)) {
dev_err(dev, "failed to map regs: %ld\n", PTR_ERR(ecc->regs));
return PTR_ERR(ecc->regs);
}
ecc->clk = devm_clk_get(dev, NULL);
if (IS_ERR(ecc->clk)) {
dev_err(dev, "failed to get clock: %ld\n", PTR_ERR(ecc->clk));
return PTR_ERR(ecc->clk);
}
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(dev, "failed to get irq\n");
return -EINVAL;
}
ret = dma_set_mask(dev, DMA_BIT_MASK(32));
if (ret) {
dev_err(dev, "failed to set DMA mask\n");
return ret;
}
ret = devm_request_irq(dev, irq, mtk_ecc_irq, 0x0, "mtk-ecc", ecc);
if (ret) {
dev_err(dev, "failed to request irq\n");
return -EINVAL;
}
ecc->dev = dev;
mutex_init(&ecc->lock);
platform_set_drvdata(pdev, ecc);
dev_info(dev, "probed\n");
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int mtk_ecc_suspend(struct device *dev)
{
struct mtk_ecc *ecc = dev_get_drvdata(dev);
clk_disable_unprepare(ecc->clk);
return 0;
}
static int mtk_ecc_resume(struct device *dev)
{
struct mtk_ecc *ecc = dev_get_drvdata(dev);
int ret;
ret = clk_prepare_enable(ecc->clk);
if (ret) {
dev_err(dev, "failed to enable clk\n");
return ret;
}
mtk_ecc_hw_init(ecc);
return 0;
}
static SIMPLE_DEV_PM_OPS(mtk_ecc_pm_ops, mtk_ecc_suspend, mtk_ecc_resume);
#endif
static const struct of_device_id mtk_ecc_dt_match[] = {
{ .compatible = "mediatek,mt2701-ecc" },
{},
};
MODULE_DEVICE_TABLE(of, mtk_ecc_dt_match);
static struct platform_driver mtk_ecc_driver = {
.probe = mtk_ecc_probe,
.driver = {
.name = "mtk-ecc",
.of_match_table = of_match_ptr(mtk_ecc_dt_match),
#ifdef CONFIG_PM_SLEEP
.pm = &mtk_ecc_pm_ops,
#endif
},
};
module_platform_driver(mtk_ecc_driver);
MODULE_AUTHOR("Xiaolei Li <xiaolei.li@mediatek.com>");
MODULE_DESCRIPTION("MTK Nand ECC Driver");
MODULE_LICENSE("GPL");
/*
* MTK SDG1 ECC controller
*
* Copyright (c) 2016 Mediatek
* Authors: Xiaolei Li <xiaolei.li@mediatek.com>
* Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org>
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*/
#ifndef __DRIVERS_MTD_NAND_MTK_ECC_H__
#define __DRIVERS_MTD_NAND_MTK_ECC_H__
#include <linux/types.h>
#define ECC_PARITY_BITS (14)
enum mtk_ecc_mode {ECC_DMA_MODE = 0, ECC_NFI_MODE = 1};
enum mtk_ecc_operation {ECC_ENCODE, ECC_DECODE};
struct device_node;
struct mtk_ecc;
struct mtk_ecc_stats {
u32 corrected;
u32 bitflips;
u32 failed;
};
struct mtk_ecc_config {
enum mtk_ecc_operation op;
enum mtk_ecc_mode mode;
dma_addr_t addr;
u32 strength;
u32 sectors;
u32 len;
};
int mtk_ecc_encode(struct mtk_ecc *, struct mtk_ecc_config *, u8 *, u32);
void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
int mtk_ecc_enable(struct mtk_ecc *, struct mtk_ecc_config *);
void mtk_ecc_disable(struct mtk_ecc *);
void mtk_ecc_adjust_strength(u32 *);
struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
void mtk_ecc_release(struct mtk_ecc *);
#endif
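For orientation, a hedged sketch of the call order this API implies for a
NAND driver decoding in NFI mode (illustrative fragment, not taken from
this series; the surrounding dev variable and error handling are assumed):

	struct mtk_ecc_stats stats;
	struct mtk_ecc_config cfg = {
		.op       = ECC_DECODE,
		.mode     = ECC_NFI_MODE,
		.strength = 24,       /* must be a supported level */
		.sectors  = 4,
		.len      = 1024 + 1, /* sector data + protected FDM byte(s) */
	};
	struct mtk_ecc *ecc;
	int ret;

	/* resolved through the "ecc-engine" phandle in the DT binding */
	ecc = of_mtk_ecc_get(dev->of_node);
	if (IS_ERR_OR_NULL(ecc))
		return ecc ? PTR_ERR(ecc) : -ENODEV;

	ret = mtk_ecc_enable(ecc, &cfg);
	if (!ret) {
		/* ... start the NFI transfer that feeds the decoder ... */
		ret = mtk_ecc_wait_done(ecc, ECC_DECODE);
		if (!ret)
			mtk_ecc_get_stats(ecc, &stats, cfg.sectors);
		mtk_ecc_disable(ecc);
	}
	mtk_ecc_release(ecc);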
/*
* MTK NAND Flash controller driver.
* Copyright (C) 2016 MediaTek Inc.
* Authors: Xiaolei Li <xiaolei.li@mediatek.com>
* Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/clk.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/mtd.h>
#include <linux/module.h>
#include <linux/iopoll.h>
#include <linux/of.h>
#include "mtk_ecc.h"
/* NAND controller register definition */
#define NFI_CNFG (0x00)
#define CNFG_AHB BIT(0)
#define CNFG_READ_EN BIT(1)
#define CNFG_DMA_BURST_EN BIT(2)
#define CNFG_BYTE_RW BIT(6)
#define CNFG_HW_ECC_EN BIT(8)
#define CNFG_AUTO_FMT_EN BIT(9)
#define CNFG_OP_CUST (6 << 12)
#define NFI_PAGEFMT (0x04)
#define PAGEFMT_FDM_ECC_SHIFT (12)
#define PAGEFMT_FDM_SHIFT (8)
#define PAGEFMT_SPARE_16 (0)
#define PAGEFMT_SPARE_26 (1)
#define PAGEFMT_SPARE_27 (2)
#define PAGEFMT_SPARE_28 (3)
#define PAGEFMT_SPARE_32 (4)
#define PAGEFMT_SPARE_36 (5)
#define PAGEFMT_SPARE_40 (6)
#define PAGEFMT_SPARE_44 (7)
#define PAGEFMT_SPARE_48 (8)
#define PAGEFMT_SPARE_49 (9)
#define PAGEFMT_SPARE_50 (0xa)
#define PAGEFMT_SPARE_51 (0xb)
#define PAGEFMT_SPARE_52 (0xc)
#define PAGEFMT_SPARE_62 (0xd)
#define PAGEFMT_SPARE_63 (0xe)
#define PAGEFMT_SPARE_64 (0xf)
#define PAGEFMT_SPARE_SHIFT (4)
#define PAGEFMT_SEC_SEL_512 BIT(2)
#define PAGEFMT_512_2K (0)
#define PAGEFMT_2K_4K (1)
#define PAGEFMT_4K_8K (2)
#define PAGEFMT_8K_16K (3)
/* NFI control */
#define NFI_CON (0x08)
#define CON_FIFO_FLUSH BIT(0)
#define CON_NFI_RST BIT(1)
#define CON_BRD BIT(8) /* burst read */
#define CON_BWR BIT(9) /* burst write */
#define CON_SEC_SHIFT (12)
/* Timing control register */
#define NFI_ACCCON (0x0C)
#define NFI_INTR_EN (0x10)
#define INTR_AHB_DONE_EN BIT(6)
#define NFI_INTR_STA (0x14)
#define NFI_CMD (0x20)
#define NFI_ADDRNOB (0x30)
#define NFI_COLADDR (0x34)
#define NFI_ROWADDR (0x38)
#define NFI_STRDATA (0x40)
#define STAR_EN (1)
#define STAR_DE (0)
#define NFI_CNRNB (0x44)
#define NFI_DATAW (0x50)
#define NFI_DATAR (0x54)
#define NFI_PIO_DIRDY (0x58)
#define PIO_DI_RDY (0x01)
#define NFI_STA (0x60)
#define STA_CMD BIT(0)
#define STA_ADDR BIT(1)
#define STA_BUSY BIT(8)
#define STA_EMP_PAGE BIT(12)
#define NFI_FSM_CUSTDATA (0xe << 16)
#define NFI_FSM_MASK (0xf << 16)
#define NFI_ADDRCNTR (0x70)
#define CNTR_MASK GENMASK(16, 12)
#define NFI_STRADDR (0x80)
#define NFI_BYTELEN (0x84)
#define NFI_CSEL (0x90)
#define NFI_FDML(x) (0xA0 + (x) * sizeof(u32) * 2)
#define NFI_FDMM(x) (0xA4 + (x) * sizeof(u32) * 2)
#define NFI_FDM_MAX_SIZE (8)
#define NFI_FDM_MIN_SIZE (1)
#define NFI_MASTER_STA (0x224)
#define MASTER_STA_MASK (0x0FFF)
#define NFI_EMPTY_THRESH (0x23C)
#define MTK_NAME "mtk-nand"
#define KB(x) ((x) * 1024UL)
#define MB(x) (KB(x) * 1024UL)
#define MTK_TIMEOUT (500000)
#define MTK_RESET_TIMEOUT (1000000)
#define MTK_MAX_SECTOR (16)
#define MTK_NAND_MAX_NSELS (2)
struct mtk_nfc_bad_mark_ctl {
void (*bm_swap)(struct mtd_info *, u8 *buf, int raw);
u32 sec;
u32 pos;
};
/*
* FDM: region used to store free OOB data
*/
struct mtk_nfc_fdm {
u32 reg_size;
u32 ecc_size;
};
struct mtk_nfc_nand_chip {
struct list_head node;
struct nand_chip nand;
struct mtk_nfc_bad_mark_ctl bad_mark;
struct mtk_nfc_fdm fdm;
u32 spare_per_sector;
int nsels;
u8 sels[0];
/* nothing after this field */
};
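/*
* Note: sels[] above is a flexible array member, so one allocation holds
* the struct plus one chip-select id per entry. A sketch of the implied
* allocation (illustrative; nsels would be counted from the DT "reg"
* entries):
*
*	chip = devm_kzalloc(dev, sizeof(*chip) + nsels * sizeof(u8),
*			    GFP_KERNEL);
*/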
struct mtk_nfc_clk {
struct clk *nfi_clk;
struct clk *pad_clk;
};
struct mtk_nfc {
struct nand_hw_control controller;
struct mtk_ecc_config ecc_cfg;
struct mtk_nfc_clk clk;
struct mtk_ecc *ecc;
struct device *dev;
void __iomem *regs;
struct completion done;
struct list_head chips;
u8 *buffer;
};
static inline struct mtk_nfc_nand_chip *to_mtk_nand(struct nand_chip *nand)
{
return container_of(nand, struct mtk_nfc_nand_chip, nand);
}
static inline u8 *data_ptr(struct nand_chip *chip, const u8 *p, int i)
{
return (u8 *)p + i * chip->ecc.size;
}
static inline u8 *oob_ptr(struct nand_chip *chip, int i)
{
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
u8 *poi;
/* map the sector's FDM data to free oob:
* the beginning of the oob area stores the FDM data of bad mark sectors
*/
if (i < mtk_nand->bad_mark.sec)
poi = chip->oob_poi + (i + 1) * mtk_nand->fdm.reg_size;
else if (i == mtk_nand->bad_mark.sec)
poi = chip->oob_poi;
else
poi = chip->oob_poi + i * mtk_nand->fdm.reg_size;
return poi;
}
static inline int mtk_data_len(struct nand_chip *chip)
{
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
return chip->ecc.size + mtk_nand->spare_per_sector;
}
static inline u8 *mtk_data_ptr(struct nand_chip *chip, int i)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
return nfc->buffer + i * mtk_data_len(chip);
}
static inline u8 *mtk_oob_ptr(struct nand_chip *chip, int i)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
return nfc->buffer + i * mtk_data_len(chip) + chip->ecc.size;
}
static inline void nfi_writel(struct mtk_nfc *nfc, u32 val, u32 reg)
{
writel(val, nfc->regs + reg);
}
static inline void nfi_writew(struct mtk_nfc *nfc, u16 val, u32 reg)
{
writew(val, nfc->regs + reg);
}
static inline void nfi_writeb(struct mtk_nfc *nfc, u8 val, u32 reg)
{
writeb(val, nfc->regs + reg);
}
static inline u32 nfi_readl(struct mtk_nfc *nfc, u32 reg)
{
return readl_relaxed(nfc->regs + reg);
}
static inline u16 nfi_readw(struct mtk_nfc *nfc, u32 reg)
{
return readw_relaxed(nfc->regs + reg);
}
static inline u8 nfi_readb(struct mtk_nfc *nfc, u32 reg)
{
return readb_relaxed(nfc->regs + reg);
}
static void mtk_nfc_hw_reset(struct mtk_nfc *nfc)
{
struct device *dev = nfc->dev;
u32 val;
int ret;
/* reset all registers and force the NFI master to terminate */
nfi_writel(nfc, CON_FIFO_FLUSH | CON_NFI_RST, NFI_CON);
/* wait for the master to finish the last transaction */
ret = readl_poll_timeout(nfc->regs + NFI_MASTER_STA, val,
!(val & MASTER_STA_MASK), 50,
MTK_RESET_TIMEOUT);
if (ret)
dev_warn(dev, "master active in reset [0x%x] = 0x%x\n",
NFI_MASTER_STA, val);
/* ensure any status register affected by the NFI master is reset */
nfi_writel(nfc, CON_FIFO_FLUSH | CON_NFI_RST, NFI_CON);
nfi_writew(nfc, STAR_DE, NFI_STRDATA);
}
static int mtk_nfc_send_command(struct mtk_nfc *nfc, u8 command)
{
struct device *dev = nfc->dev;
u32 val;
int ret;
nfi_writel(nfc, command, NFI_CMD);
ret = readl_poll_timeout_atomic(nfc->regs + NFI_STA, val,
!(val & STA_CMD), 10, MTK_TIMEOUT);
if (ret) {
dev_warn(dev, "nfi core timed out entering command mode\n");
return -EIO;
}
return 0;
}
static int mtk_nfc_send_address(struct mtk_nfc *nfc, int addr)
{
struct device *dev = nfc->dev;
u32 val;
int ret;
nfi_writel(nfc, addr, NFI_COLADDR);
nfi_writel(nfc, 0, NFI_ROWADDR);
nfi_writew(nfc, 1, NFI_ADDRNOB);
ret = readl_poll_timeout_atomic(nfc->regs + NFI_STA, val,
!(val & STA_ADDR), 10, MTK_TIMEOUT);
if (ret) {
dev_warn(dev, "nfi core timed out entering address mode\n");
return -EIO;
}
return 0;
}
static int mtk_nfc_hw_runtime_config(struct mtd_info *mtd)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_nfc *nfc = nand_get_controller_data(chip);
u32 fmt, spare;
if (!mtd->writesize)
return 0;
spare = mtk_nand->spare_per_sector;
switch (mtd->writesize) {
case 512:
fmt = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
break;
case KB(2):
if (chip->ecc.size == 512)
fmt = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512;
else
fmt = PAGEFMT_512_2K;
break;
case KB(4):
if (chip->ecc.size == 512)
fmt = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512;
else
fmt = PAGEFMT_2K_4K;
break;
case KB(8):
if (chip->ecc.size == 512)
fmt = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512;
else
fmt = PAGEFMT_4K_8K;
break;
case KB(16):
fmt = PAGEFMT_8K_16K;
break;
default:
dev_err(nfc->dev, "invalid page len: %d\n", mtd->writesize);
return -EINVAL;
}
/*
* the hardware will double the value for this eccsize, so we need to
* halve it
*/
if (chip->ecc.size == 1024)
spare >>= 1;
switch (spare) {
case 16:
fmt |= (PAGEFMT_SPARE_16 << PAGEFMT_SPARE_SHIFT);
break;
case 26:
fmt |= (PAGEFMT_SPARE_26 << PAGEFMT_SPARE_SHIFT);
break;
case 27:
fmt |= (PAGEFMT_SPARE_27 << PAGEFMT_SPARE_SHIFT);
break;
case 28:
fmt |= (PAGEFMT_SPARE_28 << PAGEFMT_SPARE_SHIFT);
break;
case 32:
fmt |= (PAGEFMT_SPARE_32 << PAGEFMT_SPARE_SHIFT);
break;
case 36:
fmt |= (PAGEFMT_SPARE_36 << PAGEFMT_SPARE_SHIFT);
break;
case 40:
fmt |= (PAGEFMT_SPARE_40 << PAGEFMT_SPARE_SHIFT);
break;
case 44:
fmt |= (PAGEFMT_SPARE_44 << PAGEFMT_SPARE_SHIFT);
break;
case 48:
fmt |= (PAGEFMT_SPARE_48 << PAGEFMT_SPARE_SHIFT);
break;
case 49:
fmt |= (PAGEFMT_SPARE_49 << PAGEFMT_SPARE_SHIFT);
break;
case 50:
fmt |= (PAGEFMT_SPARE_50 << PAGEFMT_SPARE_SHIFT);
break;
case 51:
fmt |= (PAGEFMT_SPARE_51 << PAGEFMT_SPARE_SHIFT);
break;
case 52:
fmt |= (PAGEFMT_SPARE_52 << PAGEFMT_SPARE_SHIFT);
break;
case 62:
fmt |= (PAGEFMT_SPARE_62 << PAGEFMT_SPARE_SHIFT);
break;
case 63:
fmt |= (PAGEFMT_SPARE_63 << PAGEFMT_SPARE_SHIFT);
break;
case 64:
fmt |= (PAGEFMT_SPARE_64 << PAGEFMT_SPARE_SHIFT);
break;
default:
dev_err(nfc->dev, "invalid spare per sector %d\n", spare);
return -EINVAL;
}
fmt |= mtk_nand->fdm.reg_size << PAGEFMT_FDM_SHIFT;
fmt |= mtk_nand->fdm.ecc_size << PAGEFMT_FDM_ECC_SHIFT;
nfi_writew(nfc, fmt, NFI_PAGEFMT);
nfc->ecc_cfg.strength = chip->ecc.strength;
nfc->ecc_cfg.len = chip->ecc.size + mtk_nand->fdm.ecc_size;
return 0;
}
static void mtk_nfc_select_chip(struct mtd_info *mtd, int chip)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct mtk_nfc *nfc = nand_get_controller_data(nand);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(nand);
if (chip < 0)
return;
mtk_nfc_hw_runtime_config(mtd);
nfi_writel(nfc, mtk_nand->sels[chip], NFI_CSEL);
}
static int mtk_nfc_dev_ready(struct mtd_info *mtd)
{
struct mtk_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd));
if (nfi_readl(nfc, NFI_STA) & STA_BUSY)
return 0;
return 1;
}
static void mtk_nfc_cmd_ctrl(struct mtd_info *mtd, int dat, unsigned int ctrl)
{
struct mtk_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd));
if (ctrl & NAND_ALE) {
mtk_nfc_send_address(nfc, dat);
} else if (ctrl & NAND_CLE) {
mtk_nfc_hw_reset(nfc);
nfi_writew(nfc, CNFG_OP_CUST, NFI_CNFG);
mtk_nfc_send_command(nfc, dat);
}
}
static inline void mtk_nfc_wait_ioready(struct mtk_nfc *nfc)
{
int rc;
u8 val;
rc = readb_poll_timeout_atomic(nfc->regs + NFI_PIO_DIRDY, val,
val & PIO_DI_RDY, 10, MTK_TIMEOUT);
if (rc < 0)
dev_err(nfc->dev, "data not ready\n");
}
static inline u8 mtk_nfc_read_byte(struct mtd_info *mtd)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc *nfc = nand_get_controller_data(chip);
u32 reg;
/* after each byte read, the NFI_STA reg is reset by the hardware */
reg = nfi_readl(nfc, NFI_STA) & NFI_FSM_MASK;
if (reg != NFI_FSM_CUSTDATA) {
reg = nfi_readw(nfc, NFI_CNFG);
reg |= CNFG_BYTE_RW | CNFG_READ_EN;
nfi_writew(nfc, reg, NFI_CNFG);
/*
* set to max sector to allow the HW to continue reading over
* unaligned accesses
*/
reg = (MTK_MAX_SECTOR << CON_SEC_SHIFT) | CON_BRD;
nfi_writel(nfc, reg, NFI_CON);
/* trigger to fetch data */
nfi_writew(nfc, STAR_EN, NFI_STRDATA);
}
mtk_nfc_wait_ioready(nfc);
return nfi_readb(nfc, NFI_DATAR);
}
static void mtk_nfc_read_buf(struct mtd_info *mtd, u8 *buf, int len)
{
int i;
for (i = 0; i < len; i++)
buf[i] = mtk_nfc_read_byte(mtd);
}
static void mtk_nfc_write_byte(struct mtd_info *mtd, u8 byte)
{
struct mtk_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd));
u32 reg;
reg = nfi_readl(nfc, NFI_STA) & NFI_FSM_MASK;
if (reg != NFI_FSM_CUSTDATA) {
reg = nfi_readw(nfc, NFI_CNFG) | CNFG_BYTE_RW;
nfi_writew(nfc, reg, NFI_CNFG);
reg = MTK_MAX_SECTOR << CON_SEC_SHIFT | CON_BWR;
nfi_writel(nfc, reg, NFI_CON);
nfi_writew(nfc, STAR_EN, NFI_STRDATA);
}
mtk_nfc_wait_ioready(nfc);
nfi_writeb(nfc, byte, NFI_DATAW);
}
static void mtk_nfc_write_buf(struct mtd_info *mtd, const u8 *buf, int len)
{
int i;
for (i = 0; i < len; i++)
mtk_nfc_write_byte(mtd, buf[i]);
}
static int mtk_nfc_sector_encode(struct nand_chip *chip, u8 *data)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
int size = chip->ecc.size + mtk_nand->fdm.reg_size;
nfc->ecc_cfg.mode = ECC_DMA_MODE;
nfc->ecc_cfg.op = ECC_ENCODE;
return mtk_ecc_encode(nfc->ecc, &nfc->ecc_cfg, data, size);
}
static void mtk_nfc_no_bad_mark_swap(struct mtd_info *a, u8 *b, int c)
{
/* nop */
}
static void mtk_nfc_bad_mark_swap(struct mtd_info *mtd, u8 *buf, int raw)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *nand = to_mtk_nand(chip);
u32 bad_pos = nand->bad_mark.pos;
if (raw)
bad_pos += nand->bad_mark.sec * mtk_data_len(chip);
else
bad_pos += nand->bad_mark.sec * chip->ecc.size;
swap(chip->oob_poi[0], buf[bad_pos]);
}
static int mtk_nfc_format_subpage(struct mtd_info *mtd, u32 offset,
u32 len, const u8 *buf)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_fdm *fdm = &mtk_nand->fdm;
u32 start, end;
int i, ret;
start = offset / chip->ecc.size;
end = DIV_ROUND_UP(offset + len, chip->ecc.size);
memset(nfc->buffer, 0xff, mtd->writesize + mtd->oobsize);
for (i = 0; i < chip->ecc.steps; i++) {
memcpy(mtk_data_ptr(chip, i), data_ptr(chip, buf, i),
chip->ecc.size);
if (start > i || i >= end)
continue;
if (i == mtk_nand->bad_mark.sec)
mtk_nand->bad_mark.bm_swap(mtd, nfc->buffer, 1);
memcpy(mtk_oob_ptr(chip, i), oob_ptr(chip, i), fdm->reg_size);
/* program the ECC parity back to the OOB */
ret = mtk_nfc_sector_encode(chip, mtk_data_ptr(chip, i));
if (ret < 0)
return ret;
}
return 0;
}
static void mtk_nfc_format_page(struct mtd_info *mtd, const u8 *buf)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_fdm *fdm = &mtk_nand->fdm;
u32 i;
memset(nfc->buffer, 0xff, mtd->writesize + mtd->oobsize);
for (i = 0; i < chip->ecc.steps; i++) {
if (buf)
memcpy(mtk_data_ptr(chip, i), data_ptr(chip, buf, i),
chip->ecc.size);
if (i == mtk_nand->bad_mark.sec)
mtk_nand->bad_mark.bm_swap(mtd, nfc->buffer, 1);
memcpy(mtk_oob_ptr(chip, i), oob_ptr(chip, i), fdm->reg_size);
}
}
static inline void mtk_nfc_read_fdm(struct nand_chip *chip, u32 start,
u32 sectors)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_nfc_fdm *fdm = &mtk_nand->fdm;
u32 vall, valm;
u8 *oobptr;
int i, j;
for (i = 0; i < sectors; i++) {
oobptr = oob_ptr(chip, start + i);
vall = nfi_readl(nfc, NFI_FDML(i));
valm = nfi_readl(nfc, NFI_FDMM(i));
for (j = 0; j < fdm->reg_size; j++)
oobptr[j] = (j >= 4 ? valm : vall) >> ((j % 4) * 8);
}
}
static inline void mtk_nfc_write_fdm(struct nand_chip *chip)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_nfc_fdm *fdm = &mtk_nand->fdm;
u32 vall, valm;
u8 *oobptr;
int i, j;
for (i = 0; i < chip->ecc.steps; i++) {
oobptr = oob_ptr(chip, i);
vall = 0;
valm = 0;
for (j = 0; j < 8; j++) {
if (j < 4)
vall |= (j < fdm->reg_size ? oobptr[j] : 0xff)
<< (j * 8);
else
valm |= (j < fdm->reg_size ? oobptr[j] : 0xff)
<< ((j - 4) * 8);
}
nfi_writel(nfc, vall, NFI_FDML(i));
nfi_writel(nfc, valm, NFI_FDMM(i));
}
}
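
For reference, a minimal userspace sketch of the FDM word packing done by mtk_nfc_write_fdm() above: the first four OOB bytes land in the FDML word and the next four in FDMM, little-endian, with bytes beyond fdm->reg_size padded with 0xff. fdm_pack() and the sample values are illustrative, not part of the driver.

#include <assert.h>
#include <stdint.h>

static void fdm_pack(const uint8_t *oob, int reg_size,
		     uint32_t *vall, uint32_t *valm)
{
	int j;

	*vall = 0;
	*valm = 0;
	for (j = 0; j < 8; j++) {
		uint32_t byte = j < reg_size ? oob[j] : 0xff;

		if (j < 4)
			*vall |= byte << (j * 8);
		else
			*valm |= byte << ((j - 4) * 8);
	}
}

int main(void)
{
	const uint8_t oob[6] = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06 };
	uint32_t vall, valm;

	fdm_pack(oob, 6, &vall, &valm);
	assert(vall == 0x04030201);	/* bytes 0-3, little-endian */
	assert(valm == 0xffff0605);	/* bytes 4-5, 0xff padding */
	return 0;
}
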
static int mtk_nfc_do_write_page(struct mtd_info *mtd, struct nand_chip *chip,
const u8 *buf, int page, int len)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct device *dev = nfc->dev;
dma_addr_t addr;
u32 reg;
int ret;
addr = dma_map_single(dev, (void *)buf, len, DMA_TO_DEVICE);
ret = dma_mapping_error(nfc->dev, addr);
if (ret) {
dev_err(nfc->dev, "dma mapping error\n");
return -EINVAL;
}
reg = nfi_readw(nfc, NFI_CNFG) | CNFG_AHB | CNFG_DMA_BURST_EN;
nfi_writew(nfc, reg, NFI_CNFG);
nfi_writel(nfc, chip->ecc.steps << CON_SEC_SHIFT, NFI_CON);
nfi_writel(nfc, lower_32_bits(addr), NFI_STRADDR);
nfi_writew(nfc, INTR_AHB_DONE_EN, NFI_INTR_EN);
init_completion(&nfc->done);
reg = nfi_readl(nfc, NFI_CON) | CON_BWR;
nfi_writel(nfc, reg, NFI_CON);
nfi_writew(nfc, STAR_EN, NFI_STRDATA);
ret = wait_for_completion_timeout(&nfc->done, msecs_to_jiffies(500));
if (!ret) {
dev_err(dev, "program ahb done timeout\n");
nfi_writew(nfc, 0, NFI_INTR_EN);
ret = -ETIMEDOUT;
goto timeout;
}
ret = readl_poll_timeout_atomic(nfc->regs + NFI_ADDRCNTR, reg,
(reg & CNTR_MASK) >= chip->ecc.steps,
10, MTK_TIMEOUT);
if (ret)
dev_err(dev, "hwecc write timeout\n");
timeout:
dma_unmap_single(nfc->dev, addr, len, DMA_TO_DEVICE);
nfi_writel(nfc, 0, NFI_CON);
return ret;
}
static int mtk_nfc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
const u8 *buf, int page, int raw)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
size_t len;
const u8 *bufpoi;
u32 reg;
int ret;
if (!raw) {
/* OOB => FDM: from register, ECC: from HW */
reg = nfi_readw(nfc, NFI_CNFG) | CNFG_AUTO_FMT_EN;
nfi_writew(nfc, reg | CNFG_HW_ECC_EN, NFI_CNFG);
nfc->ecc_cfg.op = ECC_ENCODE;
nfc->ecc_cfg.mode = ECC_NFI_MODE;
ret = mtk_ecc_enable(nfc->ecc, &nfc->ecc_cfg);
if (ret) {
/* clear NFI config */
reg = nfi_readw(nfc, NFI_CNFG);
reg &= ~(CNFG_AUTO_FMT_EN | CNFG_HW_ECC_EN);
nfi_writew(nfc, reg, NFI_CNFG);
return ret;
}
memcpy(nfc->buffer, buf, mtd->writesize);
mtk_nand->bad_mark.bm_swap(mtd, nfc->buffer, raw);
bufpoi = nfc->buffer;
/* write OOB into the FDM registers (OOB area in MTK NAND) */
mtk_nfc_write_fdm(chip);
} else {
bufpoi = buf;
}
len = mtd->writesize + (raw ? mtd->oobsize : 0);
ret = mtk_nfc_do_write_page(mtd, chip, bufpoi, page, len);
if (!raw)
mtk_ecc_disable(nfc->ecc);
return ret;
}
static int mtk_nfc_write_page_hwecc(struct mtd_info *mtd,
struct nand_chip *chip, const u8 *buf,
int oob_on, int page)
{
return mtk_nfc_write_page(mtd, chip, buf, page, 0);
}
static int mtk_nfc_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
const u8 *buf, int oob_on, int pg)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
mtk_nfc_format_page(mtd, buf);
return mtk_nfc_write_page(mtd, chip, nfc->buffer, pg, 1);
}
static int mtk_nfc_write_subpage_hwecc(struct mtd_info *mtd,
struct nand_chip *chip, u32 offset,
u32 data_len, const u8 *buf,
int oob_on, int page)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
int ret;
ret = mtk_nfc_format_subpage(mtd, offset, data_len, buf);
if (ret < 0)
return ret;
/* use the data in the private buffer (now with FDM and ECC parity) */
return mtk_nfc_write_page(mtd, chip, nfc->buffer, page, 1);
}
static int mtk_nfc_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
int ret;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
ret = mtk_nfc_write_page_raw(mtd, chip, NULL, 1, page);
if (ret < 0)
return -EIO;
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
ret = chip->waitfunc(mtd, chip);
return ret & NAND_STATUS_FAIL ? -EIO : 0;
}
static int mtk_nfc_update_ecc_stats(struct mtd_info *mtd, u8 *buf, u32 sectors)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_ecc_stats stats;
int rc, i;
rc = nfi_readl(nfc, NFI_STA) & STA_EMP_PAGE;
if (rc) {
memset(buf, 0xff, sectors * chip->ecc.size);
for (i = 0; i < sectors; i++)
memset(oob_ptr(chip, i), 0xff, mtk_nand->fdm.reg_size);
return 0;
}
mtk_ecc_get_stats(nfc->ecc, &stats, sectors);
mtd->ecc_stats.corrected += stats.corrected;
mtd->ecc_stats.failed += stats.failed;
return stats.bitflips;
}
static int mtk_nfc_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
u32 data_offs, u32 readlen,
u8 *bufpoi, int page, int raw)
{
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
u32 spare = mtk_nand->spare_per_sector;
u32 column, sectors, start, end, reg;
dma_addr_t addr;
int bitflips;
size_t len;
u8 *buf;
int rc;
start = data_offs / chip->ecc.size;
end = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
sectors = end - start;
column = start * (chip->ecc.size + spare);
len = sectors * chip->ecc.size + (raw ? sectors * spare : 0);
buf = bufpoi + start * chip->ecc.size;
if (column != 0)
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, column, -1);
addr = dma_map_single(nfc->dev, buf, len, DMA_FROM_DEVICE);
rc = dma_mapping_error(nfc->dev, addr);
if (rc) {
dev_err(nfc->dev, "dma mapping error\n");
return -EINVAL;
}
reg = nfi_readw(nfc, NFI_CNFG);
reg |= CNFG_READ_EN | CNFG_DMA_BURST_EN | CNFG_AHB;
if (!raw) {
reg |= CNFG_AUTO_FMT_EN | CNFG_HW_ECC_EN;
nfi_writew(nfc, reg, NFI_CNFG);
nfc->ecc_cfg.mode = ECC_NFI_MODE;
nfc->ecc_cfg.sectors = sectors;
nfc->ecc_cfg.op = ECC_DECODE;
rc = mtk_ecc_enable(nfc->ecc, &nfc->ecc_cfg);
if (rc) {
dev_err(nfc->dev, "ecc enable\n");
/* clear NFI_CNFG */
reg &= ~(CNFG_DMA_BURST_EN | CNFG_AHB | CNFG_READ_EN |
CNFG_AUTO_FMT_EN | CNFG_HW_ECC_EN);
nfi_writew(nfc, reg, NFI_CNFG);
dma_unmap_single(nfc->dev, addr, len, DMA_FROM_DEVICE);
return rc;
}
} else {
nfi_writew(nfc, reg, NFI_CNFG);
}
nfi_writel(nfc, sectors << CON_SEC_SHIFT, NFI_CON);
nfi_writew(nfc, INTR_AHB_DONE_EN, NFI_INTR_EN);
nfi_writel(nfc, lower_32_bits(addr), NFI_STRADDR);
init_completion(&nfc->done);
reg = nfi_readl(nfc, NFI_CON) | CON_BRD;
nfi_writel(nfc, reg, NFI_CON);
nfi_writew(nfc, STAR_EN, NFI_STRDATA);
rc = wait_for_completion_timeout(&nfc->done, msecs_to_jiffies(500));
if (!rc)
dev_warn(nfc->dev, "read ahb/dma done timeout\n");
rc = readl_poll_timeout_atomic(nfc->regs + NFI_BYTELEN, reg,
(reg & CNTR_MASK) >= sectors, 10,
MTK_TIMEOUT);
if (rc < 0) {
dev_err(nfc->dev, "subpage done timeout\n");
bitflips = -EIO;
} else {
bitflips = 0;
if (!raw) {
rc = mtk_ecc_wait_done(nfc->ecc, ECC_DECODE);
bitflips = rc < 0 ? -ETIMEDOUT :
mtk_nfc_update_ecc_stats(mtd, buf, sectors);
mtk_nfc_read_fdm(chip, start, sectors);
}
}
dma_unmap_single(nfc->dev, addr, len, DMA_FROM_DEVICE);
if (raw)
goto done;
mtk_ecc_disable(nfc->ecc);
if (clamp(mtk_nand->bad_mark.sec, start, end) == mtk_nand->bad_mark.sec)
mtk_nand->bad_mark.bm_swap(mtd, bufpoi, raw);
done:
nfi_writel(nfc, 0, NFI_CON);
return bitflips;
}
static int mtk_nfc_read_subpage_hwecc(struct mtd_info *mtd,
struct nand_chip *chip, u32 off,
u32 len, u8 *p, int pg)
{
return mtk_nfc_read_subpage(mtd, chip, off, len, p, pg, 0);
}
static int mtk_nfc_read_page_hwecc(struct mtd_info *mtd,
struct nand_chip *chip, u8 *p,
int oob_on, int pg)
{
return mtk_nfc_read_subpage(mtd, chip, 0, mtd->writesize, p, pg, 0);
}
static int mtk_nfc_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
u8 *buf, int oob_on, int page)
{
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_nfc *nfc = nand_get_controller_data(chip);
struct mtk_nfc_fdm *fdm = &mtk_nand->fdm;
int i, ret;
memset(nfc->buffer, 0xff, mtd->writesize + mtd->oobsize);
ret = mtk_nfc_read_subpage(mtd, chip, 0, mtd->writesize, nfc->buffer,
page, 1);
if (ret < 0)
return ret;
for (i = 0; i < chip->ecc.steps; i++) {
memcpy(oob_ptr(chip, i), mtk_oob_ptr(chip, i), fdm->reg_size);
if (i == mtk_nand->bad_mark.sec)
mtk_nand->bad_mark.bm_swap(mtd, nfc->buffer, 1);
if (buf)
memcpy(data_ptr(chip, buf, i), mtk_data_ptr(chip, i),
chip->ecc.size);
}
return ret;
}
static int mtk_nfc_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
int page)
{
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
return mtk_nfc_read_page_raw(mtd, chip, NULL, 1, page);
}
static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
{
/*
* ACCON: access timing control register
* -------------------------------------
* 31:28: minimum required time for CS post pulling down after accessing
* the device
* 27:22: minimum required time for CS pre pulling down before accessing
* the device
* 21:16: minimum required time from NCEB low to NREB low
* 15:12: minimum required time from NWEB high to NREB low.
* 11:08: write enable hold time
* 07:04: write wait states
* 03:00: read wait states
*/
nfi_writel(nfc, 0x10804211, NFI_ACCCON);
/*
* CNRNB: nand ready/busy register
* -------------------------------
* 7:4: timeout register for polling the NAND busy/ready signal
* 0 : poll the status of the busy/ready signal after [7:4]*16 cycles.
*/
nfi_writew(nfc, 0xf1, NFI_CNRNB);
nfi_writew(nfc, PAGEFMT_8K_16K, NFI_PAGEFMT);
mtk_nfc_hw_reset(nfc);
nfi_readl(nfc, NFI_INTR_STA);
nfi_writel(nfc, 0, NFI_INTR_EN);
}
static irqreturn_t mtk_nfc_irq(int irq, void *id)
{
struct mtk_nfc *nfc = id;
u16 sta, ien;
sta = nfi_readw(nfc, NFI_INTR_STA);
ien = nfi_readw(nfc, NFI_INTR_EN);
if (!(sta & ien))
return IRQ_NONE;
nfi_writew(nfc, ~sta & ien, NFI_INTR_EN);
complete(&nfc->done);
return IRQ_HANDLED;
}
static int mtk_nfc_enable_clk(struct device *dev, struct mtk_nfc_clk *clk)
{
int ret;
ret = clk_prepare_enable(clk->nfi_clk);
if (ret) {
dev_err(dev, "failed to enable nfi clk\n");
return ret;
}
ret = clk_prepare_enable(clk->pad_clk);
if (ret) {
dev_err(dev, "failed to enable pad clk\n");
clk_disable_unprepare(clk->nfi_clk);
return ret;
}
return 0;
}
static void mtk_nfc_disable_clk(struct mtk_nfc_clk *clk)
{
clk_disable_unprepare(clk->nfi_clk);
clk_disable_unprepare(clk->pad_clk);
}
static int mtk_nfc_ooblayout_free(struct mtd_info *mtd, int section,
struct mtd_oob_region *oob_region)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
struct mtk_nfc_fdm *fdm = &mtk_nand->fdm;
u32 eccsteps;
eccsteps = mtd->writesize / chip->ecc.size;
if (section >= eccsteps)
return -ERANGE;
oob_region->length = fdm->reg_size - fdm->ecc_size;
oob_region->offset = section * fdm->reg_size + fdm->ecc_size;
return 0;
}
static int mtk_nfc_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *oob_region)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
u32 eccsteps;
if (section)
return -ERANGE;
eccsteps = mtd->writesize / chip->ecc.size;
oob_region->offset = mtk_nand->fdm.reg_size * eccsteps;
oob_region->length = mtd->oobsize - oob_region->offset;
return 0;
}
static const struct mtd_ooblayout_ops mtk_nfc_ooblayout_ops = {
.free = mtk_nfc_ooblayout_free,
.ecc = mtk_nfc_ooblayout_ecc,
};
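
To make the layout concrete: a hedged standalone sketch of what these callbacks report for a hypothetical chip with a 2KiB page, 64 OOB bytes, 1KiB ECC sectors, and an 8-byte FDM whose first byte is reserved for the bad-block mark. All numbers are illustrative.

#include <stdio.h>

int main(void)
{
	/* hypothetical chip: 2KiB page, 64B OOB, 1KiB ECC sectors,
	 * 8B FDM per sector, first FDM byte = bad-block mark */
	unsigned int writesize = 2048, oobsize = 64, ecc_size = 1024;
	unsigned int reg_size = 8, fdm_ecc = 1;
	unsigned int steps = writesize / ecc_size, s;

	for (s = 0; s < steps; s++)
		printf("free[%u]: offset %u, length %u\n",
		       s, s * reg_size + fdm_ecc, reg_size - fdm_ecc);
	printf("ecc: offset %u, length %u\n",
	       steps * reg_size, oobsize - steps * reg_size);
	return 0;
}
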
static void mtk_nfc_set_fdm(struct mtk_nfc_fdm *fdm, struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct mtk_nfc_nand_chip *chip = to_mtk_nand(nand);
u32 ecc_bytes;
ecc_bytes = DIV_ROUND_UP(nand->ecc.strength * ECC_PARITY_BITS, 8);
fdm->reg_size = chip->spare_per_sector - ecc_bytes;
if (fdm->reg_size > NFI_FDM_MAX_SIZE)
fdm->reg_size = NFI_FDM_MAX_SIZE;
/* bad block mark storage */
fdm->ecc_size = 1;
}
static void mtk_nfc_set_bad_mark_ctl(struct mtk_nfc_bad_mark_ctl *bm_ctl,
struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
if (mtd->writesize == 512) {
bm_ctl->bm_swap = mtk_nfc_no_bad_mark_swap;
} else {
bm_ctl->bm_swap = mtk_nfc_bad_mark_swap;
bm_ctl->sec = mtd->writesize / mtk_data_len(nand);
bm_ctl->pos = mtd->writesize % mtk_data_len(nand);
}
}
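
A standalone sketch of the bad-mark arithmetic above, assuming mtk_data_len() is the raw per-sector stride (ECC sector size plus spare bytes) and a hypothetical 2KiB page with 1KiB sectors and 16 spare bytes each; mtk_nfc_bad_mark_swap() then swaps oob_poi[0] with the computed buffer position.

#include <stdio.h>

/* hypothetical geometry: 2KiB page, 1KiB ECC sectors, 16B spare each */
#define SECSIZE	1024u
#define SPARE	16u
#define RAWLEN	(SECSIZE + SPARE)	/* assumed mtk_data_len() stride */

int main(void)
{
	unsigned int writesize = 2048;
	unsigned int sec = writesize / RAWLEN;	/* 1 */
	unsigned int pos = writesize % RAWLEN;	/* 1008 */

	printf("raw swap: oob_poi[0] <-> buf[%u]\n", sec * RAWLEN + pos);
	printf("ecc swap: oob_poi[0] <-> buf[%u]\n", sec * SECSIZE + pos);
	return 0;
}
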
static void mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
u32 spare[] = {16, 26, 27, 28, 32, 36, 40, 44,
48, 49, 50, 51, 52, 62, 63, 64};
u32 eccsteps, i;
eccsteps = mtd->writesize / nand->ecc.size;
*sps = mtd->oobsize / eccsteps;
if (nand->ecc.size == 1024)
*sps >>= 1;
for (i = 0; i < ARRAY_SIZE(spare); i++) {
if (*sps <= spare[i]) {
if (!i)
*sps = spare[i];
else if (*sps != spare[i])
*sps = spare[i - 1];
break;
}
}
if (i >= ARRAY_SIZE(spare))
*sps = spare[ARRAY_SIZE(spare) - 1];
if (nand->ecc.size == 1024)
*sps <<= 1;
}
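
A self-contained sketch of the selection logic above: the computed spare-per-sector is rounded down to the nearest value the controller supports, or clamped to the table bounds. round_spare() is illustrative and omits the halving/doubling done for 1024-byte ECC sectors.

#include <assert.h>
#include <stddef.h>

static unsigned int round_spare(unsigned int sps)
{
	static const unsigned int spare[] = { 16, 26, 27, 28, 32, 36, 40, 44,
					      48, 49, 50, 51, 52, 62, 63, 64 };
	size_t i;

	for (i = 0; i < sizeof(spare) / sizeof(spare[0]); i++)
		if (sps <= spare[i])
			return (i == 0 || sps == spare[i]) ? spare[i]
							   : spare[i - 1];
	return spare[sizeof(spare) / sizeof(spare[0]) - 1];
}

int main(void)
{
	assert(round_spare(30) == 28);	/* rounded down to supported size */
	assert(round_spare(32) == 32);	/* exact match kept */
	assert(round_spare(10) == 16);	/* below minimum: clamped up */
	assert(round_spare(80) == 64);	/* above maximum: clamped down */
	return 0;
}
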
static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
u32 spare;
int free;
/* support only ecc hw mode */
if (nand->ecc.mode != NAND_ECC_HW) {
dev_err(dev, "ecc.mode not supported\n");
return -EINVAL;
}
/* if optional dt settings not present */
if (!nand->ecc.size || !nand->ecc.strength) {
/* use datasheet requirements */
nand->ecc.strength = nand->ecc_strength_ds;
nand->ecc.size = nand->ecc_step_ds;
/*
* align eccstrength and eccsize
* this controller only supports 512 and 1024 sizes
*/
if (nand->ecc.size < 1024) {
if (mtd->writesize > 512) {
nand->ecc.size = 1024;
nand->ecc.strength <<= 1;
} else {
nand->ecc.size = 512;
}
} else {
nand->ecc.size = 1024;
}
mtk_nfc_set_spare_per_sector(&spare, mtd);
/* calculate oob bytes except ecc parity data */
free = ((nand->ecc.strength * ECC_PARITY_BITS) + 7) >> 3;
free = spare - free;
/*
* enhance the ECC strength if the OOB bytes left over exceed the max
* FDM size, or reduce it if the OOB is too small to hold the ECC
* parity data.
*/
if (free > NFI_FDM_MAX_SIZE) {
spare -= NFI_FDM_MAX_SIZE;
nand->ecc.strength = (spare << 3) / ECC_PARITY_BITS;
} else if (free < 0) {
spare -= NFI_FDM_MIN_SIZE;
nand->ecc.strength = (spare << 3) / ECC_PARITY_BITS;
}
}
mtk_ecc_adjust_strength(&nand->ecc.strength);
dev_info(dev, "eccsize %d eccstrength %d\n",
nand->ecc.size, nand->ecc.strength);
return 0;
}
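
A worked example of the free-OOB computation above, assuming ECC_PARITY_BITS is 14 and NFI_FDM_MAX_SIZE is 8 (plausible values for this driver family, not confirmed by this excerpt); the geometry is hypothetical.

#include <stdio.h>

#define ECC_PARITY_BITS  14	/* assumed parity bits per unit of strength */
#define NFI_FDM_MAX_SIZE 8	/* assumed FDM limit */

int main(void)
{
	unsigned int strength = 24, spare = 52;
	unsigned int parity = (strength * ECC_PARITY_BITS + 7) / 8;	/* 42 */
	int free = (int)spare - (int)parity;				/* 10 */

	if (free > NFI_FDM_MAX_SIZE)	/* 10 > 8: OOB left over, raise ECC */
		strength = (spare - NFI_FDM_MAX_SIZE) * 8 / ECC_PARITY_BITS;

	printf("parity=%uB free=%dB -> new strength=%u\n",
	       parity, free, strength);	/* parity=42B free=10B -> 25 */
	return 0;
}
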
static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
struct device_node *np)
{
struct mtk_nfc_nand_chip *chip;
struct nand_chip *nand;
struct mtd_info *mtd;
int nsels, len;
u32 tmp;
int ret;
int i;
if (!of_get_property(np, "reg", &nsels))
return -ENODEV;
nsels /= sizeof(u32);
if (!nsels || nsels > MTK_NAND_MAX_NSELS) {
dev_err(dev, "invalid reg property size %d\n", nsels);
return -EINVAL;
}
chip = devm_kzalloc(dev, sizeof(*chip) + nsels * sizeof(u8),
GFP_KERNEL);
if (!chip)
return -ENOMEM;
chip->nsels = nsels;
for (i = 0; i < nsels; i++) {
ret = of_property_read_u32_index(np, "reg", i, &tmp);
if (ret) {
dev_err(dev, "reg property failure : %d\n", ret);
return ret;
}
chip->sels[i] = tmp;
}
nand = &chip->nand;
nand->controller = &nfc->controller;
nand_set_flash_node(nand, np);
nand_set_controller_data(nand, nfc);
nand->options |= NAND_USE_BOUNCE_BUFFER | NAND_SUBPAGE_READ;
nand->dev_ready = mtk_nfc_dev_ready;
nand->select_chip = mtk_nfc_select_chip;
nand->write_byte = mtk_nfc_write_byte;
nand->write_buf = mtk_nfc_write_buf;
nand->read_byte = mtk_nfc_read_byte;
nand->read_buf = mtk_nfc_read_buf;
nand->cmd_ctrl = mtk_nfc_cmd_ctrl;
/* set default mode in case dt entry is missing */
nand->ecc.mode = NAND_ECC_HW;
nand->ecc.write_subpage = mtk_nfc_write_subpage_hwecc;
nand->ecc.write_page_raw = mtk_nfc_write_page_raw;
nand->ecc.write_page = mtk_nfc_write_page_hwecc;
nand->ecc.write_oob_raw = mtk_nfc_write_oob_std;
nand->ecc.write_oob = mtk_nfc_write_oob_std;
nand->ecc.read_subpage = mtk_nfc_read_subpage_hwecc;
nand->ecc.read_page_raw = mtk_nfc_read_page_raw;
nand->ecc.read_page = mtk_nfc_read_page_hwecc;
nand->ecc.read_oob_raw = mtk_nfc_read_oob_std;
nand->ecc.read_oob = mtk_nfc_read_oob_std;
mtd = nand_to_mtd(nand);
mtd->owner = THIS_MODULE;
mtd->dev.parent = dev;
mtd->name = MTK_NAME;
mtd_set_ooblayout(mtd, &mtk_nfc_ooblayout_ops);
mtk_nfc_hw_init(nfc);
ret = nand_scan_ident(mtd, nsels, NULL);
if (ret)
return -ENODEV;
/* store the BBT magic in the page area, because the OOB is not ECC-protected */
if (nand->bbt_options & NAND_BBT_USE_FLASH)
nand->bbt_options |= NAND_BBT_NO_OOB;
ret = mtk_nfc_ecc_init(dev, mtd);
if (ret)
return -EINVAL;
if (nand->options & NAND_BUSWIDTH_16) {
dev_err(dev, "16bits buswidth not supported");
return -EINVAL;
}
mtk_nfc_set_spare_per_sector(&chip->spare_per_sector, mtd);
mtk_nfc_set_fdm(&chip->fdm, mtd);
mtk_nfc_set_bad_mark_ctl(&chip->bad_mark, mtd);
len = mtd->writesize + mtd->oobsize;
nfc->buffer = devm_kzalloc(dev, len, GFP_KERNEL);
if (!nfc->buffer)
return -ENOMEM;
ret = nand_scan_tail(mtd);
if (ret)
return -ENODEV;
ret = mtd_device_parse_register(mtd, NULL, NULL, NULL, 0);
if (ret) {
dev_err(dev, "mtd parse partition error\n");
nand_release(mtd);
return ret;
}
list_add_tail(&chip->node, &nfc->chips);
return 0;
}
static int mtk_nfc_nand_chips_init(struct device *dev, struct mtk_nfc *nfc)
{
struct device_node *np = dev->of_node;
struct device_node *nand_np;
int ret;
for_each_child_of_node(np, nand_np) {
ret = mtk_nfc_nand_chip_init(dev, nfc, nand_np);
if (ret) {
of_node_put(nand_np);
return ret;
}
}
return 0;
}
static int mtk_nfc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct mtk_nfc *nfc;
struct resource *res;
int ret, irq;
nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL);
if (!nfc)
return -ENOMEM;
spin_lock_init(&nfc->controller.lock);
init_waitqueue_head(&nfc->controller.wq);
INIT_LIST_HEAD(&nfc->chips);
/* defer probing if the ECC engine is not ready yet */
nfc->ecc = of_mtk_ecc_get(np);
if (IS_ERR(nfc->ecc))
return PTR_ERR(nfc->ecc);
else if (!nfc->ecc)
return -ENODEV;
nfc->dev = dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
nfc->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(nfc->regs)) {
ret = PTR_ERR(nfc->regs);
dev_err(dev, "no nfi base\n");
goto release_ecc;
}
nfc->clk.nfi_clk = devm_clk_get(dev, "nfi_clk");
if (IS_ERR(nfc->clk.nfi_clk)) {
dev_err(dev, "no clk\n");
ret = PTR_ERR(nfc->clk.nfi_clk);
goto release_ecc;
}
nfc->clk.pad_clk = devm_clk_get(dev, "pad_clk");
if (IS_ERR(nfc->clk.pad_clk)) {
dev_err(dev, "no pad clk\n");
ret = PTR_ERR(nfc->clk.pad_clk);
goto release_ecc;
}
ret = mtk_nfc_enable_clk(dev, &nfc->clk);
if (ret)
goto release_ecc;
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(dev, "no nfi irq resource\n");
ret = -EINVAL;
goto clk_disable;
}
ret = devm_request_irq(dev, irq, mtk_nfc_irq, 0x0, "mtk-nand", nfc);
if (ret) {
dev_err(dev, "failed to request nfi irq\n");
goto clk_disable;
}
ret = dma_set_mask(dev, DMA_BIT_MASK(32));
if (ret) {
dev_err(dev, "failed to set dma mask\n");
goto clk_disable;
}
platform_set_drvdata(pdev, nfc);
ret = mtk_nfc_nand_chips_init(dev, nfc);
if (ret) {
dev_err(dev, "failed to init nand chips\n");
goto clk_disable;
}
return 0;
clk_disable:
mtk_nfc_disable_clk(&nfc->clk);
release_ecc:
mtk_ecc_release(nfc->ecc);
return ret;
}
static int mtk_nfc_remove(struct platform_device *pdev)
{
struct mtk_nfc *nfc = platform_get_drvdata(pdev);
struct mtk_nfc_nand_chip *chip;
while (!list_empty(&nfc->chips)) {
chip = list_first_entry(&nfc->chips, struct mtk_nfc_nand_chip,
node);
nand_release(nand_to_mtd(&chip->nand));
list_del(&chip->node);
}
mtk_ecc_release(nfc->ecc);
mtk_nfc_disable_clk(&nfc->clk);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int mtk_nfc_suspend(struct device *dev)
{
struct mtk_nfc *nfc = dev_get_drvdata(dev);
mtk_nfc_disable_clk(&nfc->clk);
return 0;
}
static int mtk_nfc_resume(struct device *dev)
{
struct mtk_nfc *nfc = dev_get_drvdata(dev);
struct mtk_nfc_nand_chip *chip;
struct nand_chip *nand;
struct mtd_info *mtd;
int ret;
u32 i;
udelay(200);
ret = mtk_nfc_enable_clk(dev, &nfc->clk);
if (ret)
return ret;
mtk_nfc_hw_init(nfc);
/* reset NAND chip if VCC was powered off */
list_for_each_entry(chip, &nfc->chips, node) {
nand = &chip->nand;
mtd = nand_to_mtd(nand);
for (i = 0; i < chip->nsels; i++) {
nand->select_chip(mtd, i);
nand->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
}
}
return 0;
}
static SIMPLE_DEV_PM_OPS(mtk_nfc_pm_ops, mtk_nfc_suspend, mtk_nfc_resume);
#endif
static const struct of_device_id mtk_nfc_id_table[] = {
{ .compatible = "mediatek,mt2701-nfc" },
{}
};
MODULE_DEVICE_TABLE(of, mtk_nfc_id_table);
static struct platform_driver mtk_nfc_driver = {
.probe = mtk_nfc_probe,
.remove = mtk_nfc_remove,
.driver = {
.name = MTK_NAME,
.of_match_table = mtk_nfc_id_table,
#ifdef CONFIG_PM_SLEEP
.pm = &mtk_nfc_pm_ops,
#endif
},
};
module_platform_driver(mtk_nfc_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Xiaolei Li <xiaolei.li@mediatek.com>");
MODULE_DESCRIPTION("MTK Nand Flash Controller Driver");
......@@ -2610,7 +2610,7 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
int cached = writelen > bytes && page != blockmask;
uint8_t *wbuf = buf;
int use_bufpoi;
int part_pagewr = (column || writelen < (mtd->writesize - 1));
int part_pagewr = (column || writelen < mtd->writesize);
if (part_pagewr)
use_bufpoi = 1;
......
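This hunk is the long-standing corner case called out in the merge text: with the old predicate, writing exactly one byte less than the page size at column 0 was not treated as a partial-page write. A minimal check of the predicate change, with illustrative values:

#include <assert.h>
#include <stdbool.h>

int main(void)
{
	unsigned int writesize = 2048, column = 0, writelen = 2047;
	bool old_pred = column || writelen < (writesize - 1);	/* false */
	bool new_pred = column || writelen < writesize;		/* true */

	assert(!old_pred && new_pred);	/* 2047-byte write is now partial */
	return 0;
}
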
......@@ -168,6 +168,7 @@ struct nand_flash_dev nand_flash_ids[] = {
/* Manufacturer IDs */
struct nand_manufacturers nand_manuf_ids[] = {
{NAND_MFR_TOSHIBA, "Toshiba"},
{NAND_MFR_ESMT, "ESMT"},
{NAND_MFR_SAMSUNG, "Samsung"},
{NAND_MFR_FUJITSU, "Fujitsu"},
{NAND_MFR_NATIONAL, "National"},
......
......@@ -118,8 +118,6 @@
#define PREFETCH_STATUS_FIFO_CNT(val) ((val >> 24) & 0x7F)
#define STATUS_BUFF_EMPTY 0x00000001
#define OMAP24XX_DMA_GPMC 4
#define SECTOR_BYTES 512
/* 4 bit padding to make byte aligned, 56 = 52 + 4 */
#define BCH4_BIT_PAD 4
......@@ -1811,7 +1809,6 @@ static int omap_nand_probe(struct platform_device *pdev)
struct nand_chip *nand_chip;
int err;
dma_cap_mask_t mask;
unsigned sig;
struct resource *res;
struct device *dev = &pdev->dev;
int min_oobbytes = BADBLOCK_MARKER_LENGTH;
......@@ -1924,11 +1921,11 @@ static int omap_nand_probe(struct platform_device *pdev)
case NAND_OMAP_PREFETCH_DMA:
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
sig = OMAP24XX_DMA_GPMC;
info->dma = dma_request_channel(mask, omap_dma_filter_fn, &sig);
if (!info->dma) {
info->dma = dma_request_chan(pdev->dev.parent, "rxtx");
if (IS_ERR(info->dma)) {
dev_err(&pdev->dev, "DMA engine request failed\n");
err = -ENXIO;
err = PTR_ERR(info->dma);
goto return_error;
} else {
struct dma_slave_config cfg;
......
......@@ -39,6 +39,7 @@
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/reset.h>
#define NFC_REG_CTL 0x0000
#define NFC_REG_ST 0x0004
......@@ -153,6 +154,7 @@
/* define bit use in NFC_ECC_ST */
#define NFC_ECC_ERR(x) BIT(x)
#define NFC_ECC_ERR_MSK GENMASK(15, 0)
#define NFC_ECC_PAT_FOUND(x) BIT(x + 16)
#define NFC_ECC_ERR_CNT(b, x) (((x) >> (((b) % 4) * 8)) & 0xff)
......@@ -269,10 +271,12 @@ struct sunxi_nfc {
void __iomem *regs;
struct clk *ahb_clk;
struct clk *mod_clk;
struct reset_control *reset;
unsigned long assigned_cs;
unsigned long clk_rate;
struct list_head chips;
struct completion complete;
struct dma_chan *dmac;
};
static inline struct sunxi_nfc *to_sunxi_nfc(struct nand_hw_control *ctrl)
......@@ -365,6 +369,67 @@ static int sunxi_nfc_rst(struct sunxi_nfc *nfc)
return ret;
}
static int sunxi_nfc_dma_op_prepare(struct mtd_info *mtd, const void *buf,
int chunksize, int nchunks,
enum dma_data_direction ddir,
struct scatterlist *sg)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct dma_async_tx_descriptor *dmad;
enum dma_transfer_direction tdir;
dma_cookie_t dmat;
int ret;
if (ddir == DMA_FROM_DEVICE)
tdir = DMA_DEV_TO_MEM;
else
tdir = DMA_MEM_TO_DEV;
sg_init_one(sg, buf, nchunks * chunksize);
ret = dma_map_sg(nfc->dev, sg, 1, ddir);
if (!ret)
return -ENOMEM;
dmad = dmaengine_prep_slave_sg(nfc->dmac, sg, 1, tdir, DMA_CTRL_ACK);
if (!dmad) {
ret = -EINVAL;
goto err_unmap_buf;
}
writel(readl(nfc->regs + NFC_REG_CTL) | NFC_RAM_METHOD,
nfc->regs + NFC_REG_CTL);
writel(nchunks, nfc->regs + NFC_REG_SECTOR_NUM);
writel(chunksize, nfc->regs + NFC_REG_CNT);
dmat = dmaengine_submit(dmad);
ret = dma_submit_error(dmat);
if (ret)
goto err_clr_dma_flag;
return 0;
err_clr_dma_flag:
writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
nfc->regs + NFC_REG_CTL);
err_unmap_buf:
dma_unmap_sg(nfc->dev, sg, 1, ddir);
return ret;
}
static void sunxi_nfc_dma_op_cleanup(struct mtd_info *mtd,
enum dma_data_direction ddir,
struct scatterlist *sg)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
dma_unmap_sg(nfc->dev, sg, 1, ddir);
writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
nfc->regs + NFC_REG_CTL);
}
static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
......@@ -822,17 +887,15 @@ static void sunxi_nfc_hw_ecc_update_stats(struct mtd_info *mtd,
}
static int sunxi_nfc_hw_ecc_correct(struct mtd_info *mtd, u8 *data, u8 *oob,
int step, bool *erased)
int step, u32 status, bool *erased)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct nand_ecc_ctrl *ecc = &nand->ecc;
u32 status, tmp;
u32 tmp;
*erased = false;
status = readl(nfc->regs + NFC_REG_ECC_ST);
if (status & NFC_ECC_ERR(step))
return -EBADMSG;
......@@ -898,6 +961,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
*cur_off = oob_off + ecc->bytes + 4;
ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob_required ? oob : NULL, 0,
readl(nfc->regs + NFC_REG_ECC_ST),
&erased);
if (erased)
return 1;
......@@ -967,6 +1031,130 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
*cur_off = mtd->oobsize + mtd->writesize;
}
static int sunxi_nfc_hw_ecc_read_chunks_dma(struct mtd_info *mtd, uint8_t *buf,
int oob_required, int page,
int nchunks)
{
struct nand_chip *nand = mtd_to_nand(mtd);
bool randomized = nand->options & NAND_NEED_SCRAMBLING;
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct nand_ecc_ctrl *ecc = &nand->ecc;
unsigned int max_bitflips = 0;
int ret, i, raw_mode = 0;
struct scatterlist sg;
u32 status;
ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
if (ret)
return ret;
ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, nchunks,
DMA_FROM_DEVICE, &sg);
if (ret)
return ret;
sunxi_nfc_hw_ecc_enable(mtd);
sunxi_nfc_randomizer_config(mtd, page, false);
sunxi_nfc_randomizer_enable(mtd);
writel((NAND_CMD_RNDOUTSTART << 16) | (NAND_CMD_RNDOUT << 8) |
NAND_CMD_READSTART, nfc->regs + NFC_REG_RCMD_SET);
dma_async_issue_pending(nfc->dmac);
writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS,
nfc->regs + NFC_REG_CMD);
ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
if (ret)
dmaengine_terminate_all(nfc->dmac);
sunxi_nfc_randomizer_disable(mtd);
sunxi_nfc_hw_ecc_disable(mtd);
sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sg);
if (ret)
return ret;
status = readl(nfc->regs + NFC_REG_ECC_ST);
for (i = 0; i < nchunks; i++) {
int data_off = i * ecc->size;
int oob_off = i * (ecc->bytes + 4);
u8 *data = buf + data_off;
u8 *oob = nand->oob_poi + oob_off;
bool erased;
ret = sunxi_nfc_hw_ecc_correct(mtd, randomized ? data : NULL,
oob_required ? oob : NULL,
i, status, &erased);
/* ECC errors are handled in the second loop. */
if (ret < 0)
continue;
if (oob_required && !erased) {
/* TODO: use DMA to retrieve OOB */
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
mtd->writesize + oob_off, -1);
nand->read_buf(mtd, oob, ecc->bytes + 4);
sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, i,
!i, page);
}
if (erased)
raw_mode = 1;
sunxi_nfc_hw_ecc_update_stats(mtd, &max_bitflips, ret);
}
if (status & NFC_ECC_ERR_MSK) {
for (i = 0; i < nchunks; i++) {
int data_off = i * ecc->size;
int oob_off = i * (ecc->bytes + 4);
u8 *data = buf + data_off;
u8 *oob = nand->oob_poi + oob_off;
if (!(status & NFC_ECC_ERR(i)))
continue;
/*
* Re-read the data with the randomizer disabled to
* identify bitflips in erased pages.
*/
if (randomized) {
/* TODO: use DMA to read page in raw mode */
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
data_off, -1);
nand->read_buf(mtd, data, ecc->size);
}
/* TODO: use DMA to retrieve OOB */
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
mtd->writesize + oob_off, -1);
nand->read_buf(mtd, oob, ecc->bytes + 4);
ret = nand_check_erased_ecc_chunk(data, ecc->size,
oob, ecc->bytes + 4,
NULL, 0,
ecc->strength);
if (ret >= 0)
raw_mode = 1;
sunxi_nfc_hw_ecc_update_stats(mtd, &max_bitflips, ret);
}
}
if (oob_required)
sunxi_nfc_hw_ecc_read_extra_oob(mtd, nand->oob_poi,
NULL, !raw_mode,
page);
return max_bitflips;
}
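
For clarity, a small sketch of the per-chunk offsets used by both loops above, for a hypothetical layout with 1KiB ECC chunks and 40 protected ECC bytes plus 4 user bytes per chunk; the numbers are illustrative.

#include <stdio.h>

int main(void)
{
	/* hypothetical: 1KiB ECC chunks, 40B ECC + 4B user data per chunk */
	int ecc_size = 1024, ecc_bytes = 40, i;

	for (i = 0; i < 4; i++)
		printf("chunk %d: data at %d, protected OOB at %d\n",
		       i, i * ecc_size, i * (ecc_bytes + 4));
	return 0;
}
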
static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
const u8 *data, int data_off,
const u8 *oob, int oob_off,
......@@ -1065,6 +1253,23 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
return max_bitflips;
}
static int sunxi_nfc_hw_ecc_read_page_dma(struct mtd_info *mtd,
struct nand_chip *chip, u8 *buf,
int oob_required, int page)
{
int ret;
ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, oob_required, page,
chip->ecc.steps);
if (ret >= 0)
return ret;
/* Fall back to PIO mode */
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
return sunxi_nfc_hw_ecc_read_page(mtd, chip, buf, oob_required, page);
}
static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
struct nand_chip *chip,
u32 data_offs, u32 readlen,
......@@ -1098,6 +1303,25 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
return max_bitflips;
}
static int sunxi_nfc_hw_ecc_read_subpage_dma(struct mtd_info *mtd,
struct nand_chip *chip,
u32 data_offs, u32 readlen,
u8 *buf, int page)
{
int nchunks = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
int ret;
ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, false, page, nchunks);
if (ret >= 0)
return ret;
/* Fall back to PIO mode */
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
return sunxi_nfc_hw_ecc_read_subpage(mtd, chip, data_offs, readlen,
buf, page);
}
static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
struct nand_chip *chip,
const uint8_t *buf, int oob_required,
......@@ -1130,6 +1354,99 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
return 0;
}
static int sunxi_nfc_hw_ecc_write_subpage(struct mtd_info *mtd,
struct nand_chip *chip,
u32 data_offs, u32 data_len,
const u8 *buf, int oob_required,
int page)
{
struct nand_ecc_ctrl *ecc = &chip->ecc;
int ret, i, cur_off = 0;
sunxi_nfc_hw_ecc_enable(mtd);
for (i = data_offs / ecc->size;
i < DIV_ROUND_UP(data_offs + data_len, ecc->size); i++) {
int data_off = i * ecc->size;
int oob_off = i * (ecc->bytes + 4);
const u8 *data = buf + data_off;
const u8 *oob = chip->oob_poi + oob_off;
ret = sunxi_nfc_hw_ecc_write_chunk(mtd, data, data_off, oob,
oob_off + mtd->writesize,
&cur_off, !i, page);
if (ret)
return ret;
}
sunxi_nfc_hw_ecc_disable(mtd);
return 0;
}
static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
struct nand_chip *chip,
const u8 *buf,
int oob_required,
int page)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct nand_ecc_ctrl *ecc = &nand->ecc;
struct scatterlist sg;
int ret, i;
ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
if (ret)
return ret;
ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
DMA_TO_DEVICE, &sg);
if (ret)
goto pio_fallback;
for (i = 0; i < ecc->steps; i++) {
const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
}
sunxi_nfc_hw_ecc_enable(mtd);
sunxi_nfc_randomizer_config(mtd, page, false);
sunxi_nfc_randomizer_enable(mtd);
writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
nfc->regs + NFC_REG_RCMD_SET);
dma_async_issue_pending(nfc->dmac);
writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
NFC_DATA_TRANS | NFC_ACCESS_DIR,
nfc->regs + NFC_REG_CMD);
ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
if (ret)
dmaengine_terminate_all(nfc->dmac);
sunxi_nfc_randomizer_disable(mtd);
sunxi_nfc_hw_ecc_disable(mtd);
sunxi_nfc_dma_op_cleanup(mtd, DMA_TO_DEVICE, &sg);
if (ret)
return ret;
if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
/* TODO: use DMA to transfer extra OOB bytes ? */
sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
NULL, page);
return 0;
pio_fallback:
return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
}
static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
struct nand_chip *chip,
uint8_t *buf, int oob_required,
......@@ -1497,10 +1814,19 @@ static int sunxi_nand_hw_common_ecc_ctrl_init(struct mtd_info *mtd,
int ret;
int i;
if (ecc->size != 512 && ecc->size != 1024)
return -EINVAL;
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
/* Prefer 1k ECC chunks over 512-byte ones */
if (ecc->size == 512 && mtd->writesize > 512) {
ecc->size = 1024;
ecc->strength *= 2;
}
/* Add ECC info retrieval from DT */
for (i = 0; i < ARRAY_SIZE(strengths); i++) {
if (ecc->strength <= strengths[i])
......@@ -1550,14 +1876,28 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc,
struct device_node *np)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
int ret;
ret = sunxi_nand_hw_common_ecc_ctrl_init(mtd, ecc, np);
if (ret)
return ret;
ecc->read_page = sunxi_nfc_hw_ecc_read_page;
ecc->write_page = sunxi_nfc_hw_ecc_write_page;
if (nfc->dmac) {
ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
nand->options |= NAND_USE_BOUNCE_BUFFER;
} else {
ecc->read_page = sunxi_nfc_hw_ecc_read_page;
ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
ecc->write_page = sunxi_nfc_hw_ecc_write_page;
}
/* TODO: support DMA for raw accesses and subpage write */
ecc->write_subpage = sunxi_nfc_hw_ecc_write_subpage;
ecc->read_oob_raw = nand_read_oob_std;
ecc->write_oob_raw = nand_write_oob_std;
ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
......@@ -1871,26 +2211,59 @@ static int sunxi_nfc_probe(struct platform_device *pdev)
if (ret)
goto out_ahb_clk_unprepare;
nfc->reset = devm_reset_control_get_optional(dev, "ahb");
if (!IS_ERR(nfc->reset)) {
ret = reset_control_deassert(nfc->reset);
if (ret) {
dev_err(dev, "reset err %d\n", ret);
goto out_mod_clk_unprepare;
}
} else if (PTR_ERR(nfc->reset) != -ENOENT) {
ret = PTR_ERR(nfc->reset);
goto out_mod_clk_unprepare;
}
ret = sunxi_nfc_rst(nfc);
if (ret)
goto out_mod_clk_unprepare;
goto out_ahb_reset_reassert;
writel(0, nfc->regs + NFC_REG_INT);
ret = devm_request_irq(dev, irq, sunxi_nfc_interrupt,
0, "sunxi-nand", nfc);
if (ret)
goto out_mod_clk_unprepare;
goto out_ahb_reset_reassert;
nfc->dmac = dma_request_slave_channel(dev, "rxtx");
if (nfc->dmac) {
struct dma_slave_config dmac_cfg = { };
dmac_cfg.src_addr = r->start + NFC_REG_IO_DATA;
dmac_cfg.dst_addr = dmac_cfg.src_addr;
dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width;
dmac_cfg.src_maxburst = 4;
dmac_cfg.dst_maxburst = 4;
dmaengine_slave_config(nfc->dmac, &dmac_cfg);
} else {
dev_warn(dev, "failed to request rxtx DMA channel\n");
}
platform_set_drvdata(pdev, nfc);
ret = sunxi_nand_chips_init(dev, nfc);
if (ret) {
dev_err(dev, "failed to init nand chips\n");
goto out_mod_clk_unprepare;
goto out_release_dmac;
}
return 0;
out_release_dmac:
if (nfc->dmac)
dma_release_channel(nfc->dmac);
out_ahb_reset_reassert:
if (!IS_ERR(nfc->reset))
reset_control_assert(nfc->reset);
out_mod_clk_unprepare:
clk_disable_unprepare(nfc->mod_clk);
out_ahb_clk_unprepare:
......@@ -1904,6 +2277,12 @@ static int sunxi_nfc_remove(struct platform_device *pdev)
struct sunxi_nfc *nfc = platform_get_drvdata(pdev);
sunxi_nand_chips_cleanup(nfc);
if (!IS_ERR(nfc->reset))
reset_control_assert(nfc->reset);
if (nfc->dmac)
dma_release_channel(nfc->dmac);
clk_disable_unprepare(nfc->mod_clk);
clk_disable_unprepare(nfc->ahb_clk);
......
......@@ -4,6 +4,7 @@
* by the Free Software Foundation.
*
* Copyright © 2012 John Crispin <blogic@openwrt.org>
* Copyright © 2016 Hauke Mehrtens <hauke@hauke-m.de>
*/
#include <linux/mtd/nand.h>
......@@ -16,20 +17,28 @@
#define EBU_ADDSEL1 0x24
#define EBU_NAND_CON 0xB0
#define EBU_NAND_WAIT 0xB4
#define NAND_WAIT_RD BIT(0) /* NAND flash status output */
#define NAND_WAIT_WR_C BIT(3) /* NAND Write/Read complete */
#define EBU_NAND_ECC0 0xB8
#define EBU_NAND_ECC_AC 0xBC
/* nand commands */
#define NAND_CMD_ALE (1 << 2)
#define NAND_CMD_CLE (1 << 3)
#define NAND_CMD_CS (1 << 4)
#define NAND_WRITE_CMD_RESET 0xff
/*
* NAND commands
* The control pins of the NAND chip are driven by the address bits of each
* "register" read and write. There are no real registers; instead an
* address range is decoded, and the lower address bits activate the
* corresponding line. For example, when bit (1 << 2) is set in the address,
* the ALE pin is activated.
*/
#define NAND_CMD_ALE BIT(2) /* address latch enable */
#define NAND_CMD_CLE BIT(3) /* command latch enable */
#define NAND_CMD_CS BIT(4) /* chip select */
#define NAND_CMD_SE BIT(5) /* spare area access latch */
#define NAND_CMD_WP BIT(6) /* write protect */
#define NAND_WRITE_CMD (NAND_CMD_CS | NAND_CMD_CLE)
#define NAND_WRITE_ADDR (NAND_CMD_CS | NAND_CMD_ALE)
#define NAND_WRITE_DATA (NAND_CMD_CS)
#define NAND_READ_DATA (NAND_CMD_CS)
#define NAND_WAIT_WR_C (1 << 3)
#define NAND_WAIT_RD (0x1)
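
A standalone sketch of the address-line decoding described above: the target address of a writeb() itself selects which NAND control pins are asserted. The constants below merely restate the defines.

#include <assert.h>

#define BIT(x)		(1u << (x))
#define NAND_CMD_ALE	BIT(2)
#define NAND_CMD_CLE	BIT(3)
#define NAND_CMD_CS	BIT(4)

int main(void)
{
	/* the EBU decodes low address bits into control lines, so a command
	 * byte is written at base + CS + CLE, an address byte at
	 * base + CS + ALE, and data at base + CS */
	assert((NAND_CMD_CS | NAND_CMD_CLE) == 0x18);	/* NAND_WRITE_CMD */
	assert((NAND_CMD_CS | NAND_CMD_ALE) == 0x14);	/* NAND_WRITE_ADDR */
	assert(NAND_CMD_CS == 0x10);			/* NAND_WRITE_DATA */
	return 0;
}
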
/* we need to tell the EBU which address we mapped the NAND to */
#define ADDSEL1_MASK(x) (x << 4)
......@@ -54,31 +63,41 @@
#define NAND_CON_CSMUX (1 << 1)
#define NAND_CON_NANDM 1
static void xway_reset_chip(struct nand_chip *chip)
struct xway_nand_data {
struct nand_chip chip;
unsigned long csflags;
void __iomem *nandaddr;
};
static u8 xway_readb(struct mtd_info *mtd, int op)
{
unsigned long nandaddr = (unsigned long) chip->IO_ADDR_W;
unsigned long flags;
struct nand_chip *chip = mtd_to_nand(mtd);
struct xway_nand_data *data = nand_get_controller_data(chip);
nandaddr &= ~NAND_WRITE_ADDR;
nandaddr |= NAND_WRITE_CMD;
return readb(data->nandaddr + op);
}
/* finish with a reset */
spin_lock_irqsave(&ebu_lock, flags);
writeb(NAND_WRITE_CMD_RESET, (void __iomem *) nandaddr);
while ((ltq_ebu_r32(EBU_NAND_WAIT) & NAND_WAIT_WR_C) == 0)
;
spin_unlock_irqrestore(&ebu_lock, flags);
static void xway_writeb(struct mtd_info *mtd, int op, u8 value)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct xway_nand_data *data = nand_get_controller_data(chip);
writeb(value, data->nandaddr + op);
}
static void xway_select_chip(struct mtd_info *mtd, int chip)
static void xway_select_chip(struct mtd_info *mtd, int select)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct xway_nand_data *data = nand_get_controller_data(chip);
switch (chip) {
switch (select) {
case -1:
ltq_ebu_w32_mask(NAND_CON_CE, 0, EBU_NAND_CON);
ltq_ebu_w32_mask(NAND_CON_NANDM, 0, EBU_NAND_CON);
spin_unlock_irqrestore(&ebu_lock, data->csflags);
break;
case 0:
spin_lock_irqsave(&ebu_lock, data->csflags);
ltq_ebu_w32_mask(0, NAND_CON_NANDM, EBU_NAND_CON);
ltq_ebu_w32_mask(0, NAND_CON_CE, EBU_NAND_CON);
break;
......@@ -89,26 +108,16 @@ static void xway_select_chip(struct mtd_info *mtd, int chip)
static void xway_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl)
{
struct nand_chip *this = mtd_to_nand(mtd);
unsigned long nandaddr = (unsigned long) this->IO_ADDR_W;
unsigned long flags;
if (ctrl & NAND_CTRL_CHANGE) {
nandaddr &= ~(NAND_WRITE_CMD | NAND_WRITE_ADDR);
if (ctrl & NAND_CLE)
nandaddr |= NAND_WRITE_CMD;
else
nandaddr |= NAND_WRITE_ADDR;
this->IO_ADDR_W = (void __iomem *) nandaddr;
}
if (cmd == NAND_CMD_NONE)
return;
if (cmd != NAND_CMD_NONE) {
spin_lock_irqsave(&ebu_lock, flags);
writeb(cmd, this->IO_ADDR_W);
while ((ltq_ebu_r32(EBU_NAND_WAIT) & NAND_WAIT_WR_C) == 0)
;
spin_unlock_irqrestore(&ebu_lock, flags);
}
if (ctrl & NAND_CLE)
xway_writeb(mtd, NAND_WRITE_CMD, cmd);
else if (ctrl & NAND_ALE)
xway_writeb(mtd, NAND_WRITE_ADDR, cmd);
while ((ltq_ebu_r32(EBU_NAND_WAIT) & NAND_WAIT_WR_C) == 0)
;
}
static int xway_dev_ready(struct mtd_info *mtd)
......@@ -118,80 +127,122 @@ static int xway_dev_ready(struct mtd_info *mtd)
static unsigned char xway_read_byte(struct mtd_info *mtd)
{
struct nand_chip *this = mtd_to_nand(mtd);
unsigned long nandaddr = (unsigned long) this->IO_ADDR_R;
unsigned long flags;
int ret;
return xway_readb(mtd, NAND_READ_DATA);
}
static void xway_read_buf(struct mtd_info *mtd, u_char *buf, int len)
{
int i;
spin_lock_irqsave(&ebu_lock, flags);
ret = ltq_r8((void __iomem *)(nandaddr + NAND_READ_DATA));
spin_unlock_irqrestore(&ebu_lock, flags);
for (i = 0; i < len; i++)
buf[i] = xway_readb(mtd, NAND_READ_DATA);
}
return ret;
static void xway_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
{
int i;
for (i = 0; i < len; i++)
xway_writeb(mtd, NAND_WRITE_DATA, buf[i]);
}
/*
* Probe for the NAND device.
*/
static int xway_nand_probe(struct platform_device *pdev)
{
struct nand_chip *this = platform_get_drvdata(pdev);
unsigned long nandaddr = (unsigned long) this->IO_ADDR_W;
const __be32 *cs = of_get_property(pdev->dev.of_node,
"lantiq,cs", NULL);
struct xway_nand_data *data;
struct mtd_info *mtd;
struct resource *res;
int err;
u32 cs;
u32 cs_flag = 0;
/* Allocate memory for the device structure (and zero it) */
data = devm_kzalloc(&pdev->dev, sizeof(struct xway_nand_data),
GFP_KERNEL);
if (!data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->nandaddr = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->nandaddr))
return PTR_ERR(data->nandaddr);
nand_set_flash_node(&data->chip, pdev->dev.of_node);
mtd = nand_to_mtd(&data->chip);
mtd->dev.parent = &pdev->dev;
data->chip.cmd_ctrl = xway_cmd_ctrl;
data->chip.dev_ready = xway_dev_ready;
data->chip.select_chip = xway_select_chip;
data->chip.write_buf = xway_write_buf;
data->chip.read_buf = xway_read_buf;
data->chip.read_byte = xway_read_byte;
data->chip.chip_delay = 30;
data->chip.ecc.mode = NAND_ECC_SOFT;
data->chip.ecc.algo = NAND_ECC_HAMMING;
platform_set_drvdata(pdev, data);
nand_set_controller_data(&data->chip, data);
/* load our chip select from the DT; either we find a valid 1 or default to 0 */
if (cs && (*cs == 1))
err = of_property_read_u32(pdev->dev.of_node, "lantiq,cs", &cs);
if (!err && cs == 1)
cs_flag = NAND_CON_IN_CS1 | NAND_CON_OUT_CS1;
/* setup the EBU to run in NAND mode on our base addr */
ltq_ebu_w32(CPHYSADDR(nandaddr)
| ADDSEL1_MASK(3) | ADDSEL1_REGEN, EBU_ADDSEL1);
ltq_ebu_w32(CPHYSADDR(data->nandaddr)
| ADDSEL1_MASK(3) | ADDSEL1_REGEN, EBU_ADDSEL1);
ltq_ebu_w32(BUSCON1_SETUP | BUSCON1_BCGEN_RES | BUSCON1_WAITWRC2
| BUSCON1_WAITRDC2 | BUSCON1_HOLDC1 | BUSCON1_RECOVC1
| BUSCON1_CMULT4, LTQ_EBU_BUSCON1);
| BUSCON1_WAITRDC2 | BUSCON1_HOLDC1 | BUSCON1_RECOVC1
| BUSCON1_CMULT4, LTQ_EBU_BUSCON1);
ltq_ebu_w32(NAND_CON_NANDM | NAND_CON_CSMUX | NAND_CON_CS_P
| NAND_CON_SE_P | NAND_CON_WP_P | NAND_CON_PRE_P
| cs_flag, EBU_NAND_CON);
| NAND_CON_SE_P | NAND_CON_WP_P | NAND_CON_PRE_P
| cs_flag, EBU_NAND_CON);
/* finish with a reset */
xway_reset_chip(this);
/* Scan to find existence of the device */
err = nand_scan(mtd, 1);
if (err)
return err;
return 0;
}
err = mtd_device_register(mtd, NULL, 0);
if (err)
nand_release(mtd);
static struct platform_nand_data xway_nand_data = {
.chip = {
.nr_chips = 1,
.chip_delay = 30,
},
.ctrl = {
.probe = xway_nand_probe,
.cmd_ctrl = xway_cmd_ctrl,
.dev_ready = xway_dev_ready,
.select_chip = xway_select_chip,
.read_byte = xway_read_byte,
}
};
return err;
}
/*
* Try to find the node inside the DT. If it is available attach our
* platform_nand_data
* Remove a NAND device.
*/
static int __init xway_register_nand(void)
static int xway_nand_remove(struct platform_device *pdev)
{
struct device_node *node;
struct platform_device *pdev;
node = of_find_compatible_node(NULL, NULL, "lantiq,nand-xway");
if (!node)
return -ENOENT;
pdev = of_find_device_by_node(node);
if (!pdev)
return -EINVAL;
pdev->dev.platform_data = &xway_nand_data;
of_node_put(node);
struct xway_nand_data *data = platform_get_drvdata(pdev);
nand_release(nand_to_mtd(&data->chip));
return 0;
}
subsys_initcall(xway_register_nand);
static const struct of_device_id xway_nand_match[] = {
{ .compatible = "lantiq,nand-xway" },
{},
};
MODULE_DEVICE_TABLE(of, xway_nand_match);
static struct platform_driver xway_nand_driver = {
.probe = xway_nand_probe,
.remove = xway_nand_remove,
.driver = {
.name = "lantiq,nand-xway",
.of_match_table = xway_nand_match,
},
};
module_platform_driver(xway_nand_driver);
MODULE_LICENSE("GPL");
......@@ -3188,13 +3188,13 @@ static int onenand_otp_walk(struct mtd_info *mtd, loff_t from, size_t len,
size_t tmp_retlen;
ret = action(mtd, from, len, &tmp_retlen, buf);
if (ret)
break;
buf += tmp_retlen;
len -= tmp_retlen;
*retlen += tmp_retlen;
if (ret)
break;
}
otp_pages--;
}
......
......@@ -29,6 +29,26 @@ config MTD_SPI_NOR_USE_4K_SECTORS
Please note that some tools/drivers/filesystems may not work with
4096 B erase size (e.g. UBIFS requires 15 KiB as a minimum).
config SPI_ATMEL_QUADSPI
tristate "Atmel Quad SPI Controller"
depends on ARCH_AT91 || (ARM && COMPILE_TEST)
depends on OF && HAS_IOMEM
help
This enables support for the Quad SPI controller in master mode.
This driver does not support generic SPI. The implementation only
supports SPI NOR.
config SPI_CADENCE_QUADSPI
tristate "Cadence Quad SPI controller"
depends on OF && ARM
help
Enable support for the Cadence Quad SPI Flash controller.
Cadence QSPI is a specialized controller for connecting an SPI
Flash over a 1/2/4-bit wide bus. Enable this option if you have a
device with a Cadence QSPI controller and want to access the
Flash as an MTD device.
config SPI_FSL_QUADSPI
tristate "Freescale Quad SPI controller"
depends on ARCH_MXC || SOC_LS1021A || ARCH_LAYERSCAPE || COMPILE_TEST
......@@ -38,6 +58,13 @@ config SPI_FSL_QUADSPI
This controller does not support generic SPI. It only supports
SPI NOR.
config SPI_HISI_SFC
tristate "Hisilicon SPI-NOR Flash Controller(SFC)"
depends on ARCH_HISI || COMPILE_TEST
depends on HAS_IOMEM && HAS_DMA
help
This enables support for the HiSilicon SPI-NOR flash controller.
config SPI_NXP_SPIFI
tristate "NXP SPI Flash Interface (SPIFI)"
depends on OF && (ARCH_LPC18XX || COMPILE_TEST)
......
obj-$(CONFIG_MTD_SPI_NOR) += spi-nor.o
obj-$(CONFIG_SPI_ATMEL_QUADSPI) += atmel-quadspi.o
obj-$(CONFIG_SPI_CADENCE_QUADSPI) += cadence-quadspi.o
obj-$(CONFIG_SPI_FSL_QUADSPI) += fsl-quadspi.o
obj-$(CONFIG_SPI_HISI_SFC) += hisi-sfc.o
obj-$(CONFIG_MTD_MT81xx_NOR) += mtk-quadspi.o
obj-$(CONFIG_SPI_NXP_SPIFI) += nxp-spifi.o
/*
* Driver for Atmel QSPI Controller
*
* Copyright (C) 2015 Atmel Corporation
*
* Author: Cyrille Pitchen <cyrille.pitchen@atmel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*
* This driver is based on drivers/mtd/spi-nor/fsl-quadspi.c from Freescale.
*/
#include <linux/kernel.h>
#include <linux/clk.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/spi-nor.h>
#include <linux/platform_data/atmel.h>
#include <linux/of.h>
#include <linux/io.h>
#include <linux/gpio.h>
#include <linux/pinctrl/consumer.h>
/* QSPI register offsets */
#define QSPI_CR 0x0000 /* Control Register */
#define QSPI_MR 0x0004 /* Mode Register */
#define QSPI_RD 0x0008 /* Receive Data Register */
#define QSPI_TD 0x000c /* Transmit Data Register */
#define QSPI_SR 0x0010 /* Status Register */
#define QSPI_IER 0x0014 /* Interrupt Enable Register */
#define QSPI_IDR 0x0018 /* Interrupt Disable Register */
#define QSPI_IMR 0x001c /* Interrupt Mask Register */
#define QSPI_SCR 0x0020 /* Serial Clock Register */
#define QSPI_IAR 0x0030 /* Instruction Address Register */
#define QSPI_ICR 0x0034 /* Instruction Code Register */
#define QSPI_IFR 0x0038 /* Instruction Frame Register */
#define QSPI_SMR 0x0040 /* Scrambling Mode Register */
#define QSPI_SKR 0x0044 /* Scrambling Key Register */
#define QSPI_WPMR 0x00E4 /* Write Protection Mode Register */
#define QSPI_WPSR 0x00E8 /* Write Protection Status Register */
#define QSPI_VERSION 0x00FC /* Version Register */
/* Bitfields in QSPI_CR (Control Register) */
#define QSPI_CR_QSPIEN BIT(0)
#define QSPI_CR_QSPIDIS BIT(1)
#define QSPI_CR_SWRST BIT(7)
#define QSPI_CR_LASTXFER BIT(24)
/* Bitfields in QSPI_MR (Mode Register) */
#define QSPI_MR_SSM BIT(0)
#define QSPI_MR_LLB BIT(1)
#define QSPI_MR_WDRBT BIT(2)
#define QSPI_MR_SMRM BIT(3)
#define QSPI_MR_CSMODE_MASK GENMASK(5, 4)
#define QSPI_MR_CSMODE_NOT_RELOADED (0 << 4)
#define QSPI_MR_CSMODE_LASTXFER (1 << 4)
#define QSPI_MR_CSMODE_SYSTEMATICALLY (2 << 4)
#define QSPI_MR_NBBITS_MASK GENMASK(11, 8)
#define QSPI_MR_NBBITS(n) ((((n) - 8) << 8) & QSPI_MR_NBBITS_MASK)
#define QSPI_MR_DLYBCT_MASK GENMASK(23, 16)
#define QSPI_MR_DLYBCT(n) (((n) << 16) & QSPI_MR_DLYBCT_MASK)
#define QSPI_MR_DLYCS_MASK GENMASK(31, 24)
#define QSPI_MR_DLYCS(n) (((n) << 24) & QSPI_MR_DLYCS_MASK)
/* Bitfields in QSPI_SR/QSPI_IER/QSPI_IDR/QSPI_IMR */
#define QSPI_SR_RDRF BIT(0)
#define QSPI_SR_TDRE BIT(1)
#define QSPI_SR_TXEMPTY BIT(2)
#define QSPI_SR_OVRES BIT(3)
#define QSPI_SR_CSR BIT(8)
#define QSPI_SR_CSS BIT(9)
#define QSPI_SR_INSTRE BIT(10)
#define QSPI_SR_QSPIENS BIT(24)
#define QSPI_SR_CMD_COMPLETED (QSPI_SR_INSTRE | QSPI_SR_CSR)
/* Bitfields in QSPI_SCR (Serial Clock Register) */
#define QSPI_SCR_CPOL BIT(0)
#define QSPI_SCR_CPHA BIT(1)
#define QSPI_SCR_SCBR_MASK GENMASK(15, 8)
#define QSPI_SCR_SCBR(n) (((n) << 8) & QSPI_SCR_SCBR_MASK)
#define QSPI_SCR_DLYBS_MASK GENMASK(23, 16)
#define QSPI_SCR_DLYBS(n) (((n) << 16) & QSPI_SCR_DLYBS_MASK)
/* Bitfields in QSPI_ICR (Instruction Code Register) */
#define QSPI_ICR_INST_MASK GENMASK(7, 0)
#define QSPI_ICR_INST(inst) (((inst) << 0) & QSPI_ICR_INST_MASK)
#define QSPI_ICR_OPT_MASK GENMASK(23, 16)
#define QSPI_ICR_OPT(opt) (((opt) << 16) & QSPI_ICR_OPT_MASK)
/* Bitfields in QSPI_IFR (Instruction Frame Register) */
#define QSPI_IFR_WIDTH_MASK GENMASK(2, 0)
#define QSPI_IFR_WIDTH_SINGLE_BIT_SPI (0 << 0)
#define QSPI_IFR_WIDTH_DUAL_OUTPUT (1 << 0)
#define QSPI_IFR_WIDTH_QUAD_OUTPUT (2 << 0)
#define QSPI_IFR_WIDTH_DUAL_IO (3 << 0)
#define QSPI_IFR_WIDTH_QUAD_IO (4 << 0)
#define QSPI_IFR_WIDTH_DUAL_CMD (5 << 0)
#define QSPI_IFR_WIDTH_QUAD_CMD (6 << 0)
#define QSPI_IFR_INSTEN BIT(4)
#define QSPI_IFR_ADDREN BIT(5)
#define QSPI_IFR_OPTEN BIT(6)
#define QSPI_IFR_DATAEN BIT(7)
#define QSPI_IFR_OPTL_MASK GENMASK(9, 8)
#define QSPI_IFR_OPTL_1BIT (0 << 8)
#define QSPI_IFR_OPTL_2BIT (1 << 8)
#define QSPI_IFR_OPTL_4BIT (2 << 8)
#define QSPI_IFR_OPTL_8BIT (3 << 8)
#define QSPI_IFR_ADDRL BIT(10)
#define QSPI_IFR_TFRTYP_MASK GENMASK(13, 12)
#define QSPI_IFR_TFRTYP_TRSFR_READ (0 << 12)
#define QSPI_IFR_TFRTYP_TRSFR_READ_MEM (1 << 12)
#define QSPI_IFR_TFRTYP_TRSFR_WRITE (2 << 12)
#define QSPI_IFR_TFRTYP_TRSFR_WRITE_MEM (3 << 12)
#define QSPI_IFR_CRM BIT(14)
#define QSPI_IFR_NBDUM_MASK GENMASK(20, 16)
#define QSPI_IFR_NBDUM(n) (((n) << 16) & QSPI_IFR_NBDUM_MASK)
/* Bitfields in QSPI_SMR (Scrambling Mode Register) */
#define QSPI_SMR_SCREN BIT(0)
#define QSPI_SMR_RVDIS BIT(1)
/* Bitfields in QSPI_WPMR (Write Protection Mode Register) */
#define QSPI_WPMR_WPEN BIT(0)
#define QSPI_WPMR_WPKEY_MASK GENMASK(31, 8)
#define QSPI_WPMR_WPKEY(wpkey) (((wpkey) << 8) & QSPI_WPMR_WPKEY_MASK)
/* Bitfields in QSPI_WPSR (Write Protection Status Register) */
#define QSPI_WPSR_WPVS BIT(0)
#define QSPI_WPSR_WPVSRC_MASK GENMASK(15, 8)
#define QSPI_WPSR_WPVSRC(src) (((src) << 8) & QSPI_WPSR_WPVSRC_MASK)
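
These field macros all follow the same shift-then-mask pattern. A standalone sketch (with a local 32-bit GENMASK) of how QSPI_MR_NBBITS() encodes the transfer size:

#include <assert.h>

#define GENMASK32(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))
#define NBBITS_MASK	GENMASK32(11, 8)
#define NBBITS(n)	((((n) - 8) << 8) & NBBITS_MASK)

int main(void)
{
	assert(NBBITS_MASK == 0xf00);
	assert(NBBITS(8)  == 0x000);	/* 8-bit transfers encode as 0 */
	assert(NBBITS(16) == 0x800);	/* 16-bit transfers encode as 8 */
	assert(NBBITS(99) == 0xb00);	/* out-of-range values are masked */
	return 0;
}
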
struct atmel_qspi {
void __iomem *regs;
void __iomem *mem;
struct clk *clk;
struct platform_device *pdev;
u32 pending;
struct spi_nor nor;
u32 clk_rate;
struct completion cmd_completion;
};
struct atmel_qspi_command {
union {
struct {
u32 instruction:1;
u32 address:3;
u32 mode:1;
u32 dummy:1;
u32 data:1;
u32 reserved:25;
} bits;
u32 word;
} enable;
u8 instruction;
u8 mode;
u8 num_mode_cycles;
u8 num_dummy_cycles;
u32 address;
size_t buf_len;
const void *tx_buf;
void *rx_buf;
};
/* Register access functions */
static inline u32 qspi_readl(struct atmel_qspi *aq, u32 reg)
{
return readl_relaxed(aq->regs + reg);
}
static inline void qspi_writel(struct atmel_qspi *aq, u32 reg, u32 value)
{
writel_relaxed(value, aq->regs + reg);
}
static int atmel_qspi_run_transfer(struct atmel_qspi *aq,
const struct atmel_qspi_command *cmd)
{
void __iomem *ahb_mem;
/* Fall back to a PIO transfer (plain memcpy() DOES NOT work here) */
ahb_mem = aq->mem;
if (cmd->enable.bits.address)
ahb_mem += cmd->address;
if (cmd->tx_buf)
_memcpy_toio(ahb_mem, cmd->tx_buf, cmd->buf_len);
else
_memcpy_fromio(cmd->rx_buf, ahb_mem, cmd->buf_len);
return 0;
}
#ifdef DEBUG
static void atmel_qspi_debug_command(struct atmel_qspi *aq,
const struct atmel_qspi_command *cmd,
u32 ifr)
{
u8 cmd_buf[SPI_NOR_MAX_CMD_SIZE];
size_t len = 0;
int i;
if (cmd->enable.bits.instruction)
cmd_buf[len++] = cmd->instruction;
for (i = cmd->enable.bits.address - 1; i >= 0; --i)
cmd_buf[len++] = (cmd->address >> (i << 3)) & 0xff;
if (cmd->enable.bits.mode)
cmd_buf[len++] = cmd->mode;
if (cmd->enable.bits.dummy) {
int num = cmd->num_dummy_cycles;
switch (ifr & QSPI_IFR_WIDTH_MASK) {
case QSPI_IFR_WIDTH_SINGLE_BIT_SPI:
case QSPI_IFR_WIDTH_DUAL_OUTPUT:
case QSPI_IFR_WIDTH_QUAD_OUTPUT:
num >>= 3;
break;
case QSPI_IFR_WIDTH_DUAL_IO:
case QSPI_IFR_WIDTH_DUAL_CMD:
num >>= 2;
break;
case QSPI_IFR_WIDTH_QUAD_IO:
case QSPI_IFR_WIDTH_QUAD_CMD:
num >>= 1;
break;
default:
return;
}
for (i = 0; i < num; ++i)
cmd_buf[len++] = 0;
}
/* Dump the SPI command */
print_hex_dump(KERN_DEBUG, "qspi cmd: ", DUMP_PREFIX_NONE,
32, 1, cmd_buf, len, false);
#ifdef VERBOSE_DEBUG
/* If verbose debug is enabled, also dump the TX data */
if (cmd->enable.bits.data && cmd->tx_buf)
print_hex_dump(KERN_DEBUG, "qspi tx : ", DUMP_PREFIX_NONE,
32, 1, cmd->tx_buf, cmd->buf_len, false);
#endif
}
#else
#define atmel_qspi_debug_command(aq, cmd, ifr)
#endif
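
The dummy-cycle shifts above convert clock cycles to bytes for the active bus width (8, 4, or 2 cycles per byte). A standalone check of that arithmetic, with dummy_bytes() as an illustrative helper:

#include <assert.h>

/* dummy clock cycles -> dummy bytes at a given bus width */
static int dummy_bytes(int cycles, int bits_per_cycle)
{
	return cycles * bits_per_cycle / 8;
}

int main(void)
{
	assert(dummy_bytes(8, 1) == 1);	/* single-bit SPI: num >> 3 */
	assert(dummy_bytes(8, 2) == 2);	/* dual I/O:       num >> 2 */
	assert(dummy_bytes(8, 4) == 4);	/* quad I/O:       num >> 1 */
	return 0;
}
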
static int atmel_qspi_run_command(struct atmel_qspi *aq,
const struct atmel_qspi_command *cmd,
u32 ifr_tfrtyp, u32 ifr_width)
{
u32 iar, icr, ifr, sr;
int err = 0;
iar = 0;
icr = 0;
ifr = ifr_tfrtyp | ifr_width;
/* Compute instruction parameters */
if (cmd->enable.bits.instruction) {
icr |= QSPI_ICR_INST(cmd->instruction);
ifr |= QSPI_IFR_INSTEN;
}
/* Compute address parameters */
switch (cmd->enable.bits.address) {
case 4:
ifr |= QSPI_IFR_ADDRL;
/* fall through to the 24-bit (3-byte) address case. */
case 3:
iar = (cmd->enable.bits.data) ? 0 : cmd->address;
ifr |= QSPI_IFR_ADDREN;
break;
case 0:
break;
default:
return -EINVAL;
}
/* Compute option parameters */
if (cmd->enable.bits.mode && cmd->num_mode_cycles) {
u32 mode_cycle_bits, mode_bits;
icr |= QSPI_ICR_OPT(cmd->mode);
ifr |= QSPI_IFR_OPTEN;
switch (ifr & QSPI_IFR_WIDTH_MASK) {
case QSPI_IFR_WIDTH_SINGLE_BIT_SPI:
case QSPI_IFR_WIDTH_DUAL_OUTPUT:
case QSPI_IFR_WIDTH_QUAD_OUTPUT:
mode_cycle_bits = 1;
break;
case QSPI_IFR_WIDTH_DUAL_IO:
case QSPI_IFR_WIDTH_DUAL_CMD:
mode_cycle_bits = 2;
break;
case QSPI_IFR_WIDTH_QUAD_IO:
case QSPI_IFR_WIDTH_QUAD_CMD:
mode_cycle_bits = 4;
break;
default:
return -EINVAL;
}
mode_bits = cmd->num_mode_cycles * mode_cycle_bits;
switch (mode_bits) {
case 1:
ifr |= QSPI_IFR_OPTL_1BIT;
break;
case 2:
ifr |= QSPI_IFR_OPTL_2BIT;
break;
case 4:
ifr |= QSPI_IFR_OPTL_4BIT;
break;
case 8:
ifr |= QSPI_IFR_OPTL_8BIT;
break;
default:
return -EINVAL;
}
}
/* Set number of dummy cycles */
if (cmd->enable.bits.dummy)
ifr |= QSPI_IFR_NBDUM(cmd->num_dummy_cycles);
/* Set data enable */
if (cmd->enable.bits.data) {
ifr |= QSPI_IFR_DATAEN;
/* Special case for Continuous Read Mode */
if (!cmd->tx_buf && !cmd->rx_buf)
ifr |= QSPI_IFR_CRM;
}
/* Clear pending interrupts */
(void)qspi_readl(aq, QSPI_SR);
/* Set QSPI Instruction Frame registers */
atmel_qspi_debug_command(aq, cmd, ifr);
qspi_writel(aq, QSPI_IAR, iar);
qspi_writel(aq, QSPI_ICR, icr);
qspi_writel(aq, QSPI_IFR, ifr);
/* Skip to the final steps if there is no data */
if (!cmd->enable.bits.data)
goto no_data;
/* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
(void)qspi_readl(aq, QSPI_IFR);
/* Stop here for continuous read */
if (!cmd->tx_buf && !cmd->rx_buf)
return 0;
/* Send/Receive data */
err = atmel_qspi_run_transfer(aq, cmd);
/* Release the chip-select */
qspi_writel(aq, QSPI_CR, QSPI_CR_LASTXFER);
if (err)
return err;
#if defined(DEBUG) && defined(VERBOSE_DEBUG)
/*
* If verbose debug is enabled, also dump the RX data in addition to
* the SPI command previously dumped by atmel_qspi_debug_command()
*/
if (cmd->rx_buf)
print_hex_dump(KERN_DEBUG, "qspi rx : ", DUMP_PREFIX_NONE,
32, 1, cmd->rx_buf, cmd->buf_len, false);
#endif
no_data:
/* Poll INSTRuction End status */
sr = qspi_readl(aq, QSPI_SR);
if ((sr & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
return err;
/* Wait for INSTRuction End interrupt */
reinit_completion(&aq->cmd_completion);
aq->pending = sr & QSPI_SR_CMD_COMPLETED;
qspi_writel(aq, QSPI_IER, QSPI_SR_CMD_COMPLETED);
if (!wait_for_completion_timeout(&aq->cmd_completion,
msecs_to_jiffies(1000)))
err = -ETIMEDOUT;
qspi_writel(aq, QSPI_IDR, QSPI_SR_CMD_COMPLETED);
return err;
}
static int atmel_qspi_read_reg(struct spi_nor *nor, u8 opcode,
u8 *buf, int len)
{
struct atmel_qspi *aq = nor->priv;
struct atmel_qspi_command cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.enable.bits.instruction = 1;
cmd.enable.bits.data = 1;
cmd.instruction = opcode;
cmd.rx_buf = buf;
cmd.buf_len = len;
return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_READ,
QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
}
static int atmel_qspi_write_reg(struct spi_nor *nor, u8 opcode,
u8 *buf, int len)
{
struct atmel_qspi *aq = nor->priv;
struct atmel_qspi_command cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.enable.bits.instruction = 1;
cmd.enable.bits.data = (buf != NULL && len > 0);
cmd.instruction = opcode;
cmd.tx_buf = buf;
cmd.buf_len = len;
return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE,
QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
}
static ssize_t atmel_qspi_write(struct spi_nor *nor, loff_t to, size_t len,
const u_char *write_buf)
{
struct atmel_qspi *aq = nor->priv;
struct atmel_qspi_command cmd;
ssize_t ret;
memset(&cmd, 0, sizeof(cmd));
cmd.enable.bits.instruction = 1;
cmd.enable.bits.address = nor->addr_width;
cmd.enable.bits.data = 1;
cmd.instruction = nor->program_opcode;
cmd.address = (u32)to;
cmd.tx_buf = write_buf;
cmd.buf_len = len;
ret = atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE_MEM,
QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
return (ret < 0) ? ret : len;
}
static int atmel_qspi_erase(struct spi_nor *nor, loff_t offs)
{
struct atmel_qspi *aq = nor->priv;
struct atmel_qspi_command cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.enable.bits.instruction = 1;
cmd.enable.bits.address = nor->addr_width;
cmd.instruction = nor->erase_opcode;
cmd.address = (u32)offs;
return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE,
QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
}
static ssize_t atmel_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
u_char *read_buf)
{
struct atmel_qspi *aq = nor->priv;
struct atmel_qspi_command cmd;
u8 num_mode_cycles, num_dummy_cycles;
u32 ifr_width;
ssize_t ret;
switch (nor->flash_read) {
case SPI_NOR_NORMAL:
case SPI_NOR_FAST:
ifr_width = QSPI_IFR_WIDTH_SINGLE_BIT_SPI;
break;
case SPI_NOR_DUAL:
ifr_width = QSPI_IFR_WIDTH_DUAL_OUTPUT;
break;
case SPI_NOR_QUAD:
ifr_width = QSPI_IFR_WIDTH_QUAD_OUTPUT;
break;
default:
return -EINVAL;
}
if (nor->read_dummy >= 2) {
num_mode_cycles = 2;
num_dummy_cycles = nor->read_dummy - 2;
} else {
num_mode_cycles = nor->read_dummy;
num_dummy_cycles = 0;
}
memset(&cmd, 0, sizeof(cmd));
cmd.enable.bits.instruction = 1;
cmd.enable.bits.address = nor->addr_width;
cmd.enable.bits.mode = (num_mode_cycles > 0);
cmd.enable.bits.dummy = (num_dummy_cycles > 0);
cmd.enable.bits.data = 1;
cmd.instruction = nor->read_opcode;
cmd.address = (u32)from;
cmd.mode = 0xff; /* This value prevents the flash from entering the 0-4-4 (continuous read) mode */
cmd.num_mode_cycles = num_mode_cycles;
cmd.num_dummy_cycles = num_dummy_cycles;
cmd.rx_buf = read_buf;
cmd.buf_len = len;
ret = atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_READ_MEM,
ifr_width);
return (ret < 0) ? ret : len;
}
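atmel_qspi_read() above folds the first two of nor->read_dummy clocks into a mode/option phase, sent as 0xff so the flash never latches a continuous-read pattern, and programs the remainder as plain dummy cycles. A minimal sketch of just that split, assuming nothing beyond the arithmetic the function shows:

#include <stdio.h>

/* Split total dummy clocks into mode cycles + remaining dummy cycles,
 * exactly as atmel_qspi_read() does. */
static void split_dummy(int read_dummy, int *mode, int *dummy)
{
	if (read_dummy >= 2) {
		*mode = 2;
		*dummy = read_dummy - 2;
	} else {
		*mode = read_dummy;
		*dummy = 0;
	}
}

int main(void)
{
	int mode, dummy;

	split_dummy(8, &mode, &dummy);	/* typical fast/quad read: 2 + 6 */
	printf("mode=%d dummy=%d\n", mode, dummy);
	split_dummy(0, &mode, &dummy);	/* plain READ: no mode, no dummy */
	printf("mode=%d dummy=%d\n", mode, dummy);
	return 0;
}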
static int atmel_qspi_init(struct atmel_qspi *aq)
{
unsigned long src_rate;
u32 mr, scr, scbr;
/* Reset the QSPI controller */
qspi_writel(aq, QSPI_CR, QSPI_CR_SWRST);
/* Set the QSPI controller in Serial Memory Mode */
mr = QSPI_MR_NBBITS(8) | QSPI_MR_SSM;
qspi_writel(aq, QSPI_MR, mr);
src_rate = clk_get_rate(aq->clk);
if (!src_rate)
return -EINVAL;
/* Compute the QSPI baudrate */
scbr = DIV_ROUND_UP(src_rate, aq->clk_rate);
if (scbr > 0)
scbr--;
scr = QSPI_SCR_SCBR(scbr);
qspi_writel(aq, QSPI_SCR, scr);
/* Enable the QSPI controller */
qspi_writel(aq, QSPI_CR, QSPI_CR_QSPIEN);
return 0;
}
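The SCBR field divides the peripheral clock so that the serial clock, src_rate / (scbr + 1), never exceeds the requested rate. A worked example, assuming an illustrative 166 MHz peripheral clock and an 83 MHz spi-max-frequency (neither figure comes from the datasheet):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d)) /* as in the kernel */

int main(void)
{
	unsigned long src_rate = 166000000;	/* assumed peripheral clock */
	unsigned long wanted = 83000000;	/* assumed spi-max-frequency */
	unsigned long scbr = DIV_ROUND_UP(src_rate, wanted);

	if (scbr > 0)
		scbr--;
	/* scbr = 1, actual = 166 MHz / (1 + 1) = 83 MHz, never above wanted */
	printf("scbr=%lu actual=%lu Hz\n", scbr, src_rate / (scbr + 1));
	return 0;
}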
static irqreturn_t atmel_qspi_interrupt(int irq, void *dev_id)
{
struct atmel_qspi *aq = (struct atmel_qspi *)dev_id;
u32 status, mask, pending;
status = qspi_readl(aq, QSPI_SR);
mask = qspi_readl(aq, QSPI_IMR);
pending = status & mask;
if (!pending)
return IRQ_NONE;
aq->pending |= pending;
if ((aq->pending & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
complete(&aq->cmd_completion);
return IRQ_HANDLED;
}
static int atmel_qspi_probe(struct platform_device *pdev)
{
struct device_node *child, *np = pdev->dev.of_node;
struct atmel_qspi *aq;
struct resource *res;
struct spi_nor *nor;
struct mtd_info *mtd;
int irq, err = 0;
if (of_get_child_count(np) != 1)
return -ENODEV;
child = of_get_next_child(np, NULL);
aq = devm_kzalloc(&pdev->dev, sizeof(*aq), GFP_KERNEL);
if (!aq) {
err = -ENOMEM;
goto exit;
}
platform_set_drvdata(pdev, aq);
init_completion(&aq->cmd_completion);
aq->pdev = pdev;
/* Map the registers */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_base");
aq->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(aq->regs)) {
dev_err(&pdev->dev, "missing registers\n");
err = PTR_ERR(aq->regs);
goto exit;
}
/* Map the AHB memory */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_mmap");
aq->mem = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(aq->mem)) {
dev_err(&pdev->dev, "missing AHB memory\n");
err = PTR_ERR(aq->mem);
goto exit;
}
/* Get the peripheral clock */
aq->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(aq->clk)) {
dev_err(&pdev->dev, "missing peripheral clock\n");
err = PTR_ERR(aq->clk);
goto exit;
}
/* Enable the peripheral clock */
err = clk_prepare_enable(aq->clk);
if (err) {
dev_err(&pdev->dev, "failed to enable the peripheral clock\n");
goto exit;
}
/* Request the IRQ */
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(&pdev->dev, "missing IRQ\n");
err = irq;
goto disable_clk;
}
err = devm_request_irq(&pdev->dev, irq, atmel_qspi_interrupt,
0, dev_name(&pdev->dev), aq);
if (err)
goto disable_clk;
/* Setup the spi-nor */
nor = &aq->nor;
mtd = &nor->mtd;
nor->dev = &pdev->dev;
spi_nor_set_flash_node(nor, child);
nor->priv = aq;
mtd->priv = nor;
nor->read_reg = atmel_qspi_read_reg;
nor->write_reg = atmel_qspi_write_reg;
nor->read = atmel_qspi_read;
nor->write = atmel_qspi_write;
nor->erase = atmel_qspi_erase;
err = of_property_read_u32(child, "spi-max-frequency", &aq->clk_rate);
if (err < 0)
goto disable_clk;
err = atmel_qspi_init(aq);
if (err)
goto disable_clk;
err = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
if (err)
goto disable_clk;
err = mtd_device_register(mtd, NULL, 0);
if (err)
goto disable_clk;
of_node_put(child);
return 0;
disable_clk:
clk_disable_unprepare(aq->clk);
exit:
of_node_put(child);
return err;
}
static int atmel_qspi_remove(struct platform_device *pdev)
{
struct atmel_qspi *aq = platform_get_drvdata(pdev);
mtd_device_unregister(&aq->nor.mtd);
qspi_writel(aq, QSPI_CR, QSPI_CR_QSPIDIS);
clk_disable_unprepare(aq->clk);
return 0;
}
static const struct of_device_id atmel_qspi_dt_ids[] = {
{ .compatible = "atmel,sama5d2-qspi" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, atmel_qspi_dt_ids);
static struct platform_driver atmel_qspi_driver = {
.driver = {
.name = "atmel_qspi",
.of_match_table = atmel_qspi_dt_ids,
},
.probe = atmel_qspi_probe,
.remove = atmel_qspi_remove,
};
module_platform_driver(atmel_qspi_driver);
MODULE_AUTHOR("Cyrille Pitchen <cyrille.pitchen@atmel.com>");
MODULE_DESCRIPTION("Atmel QSPI Controller driver");
MODULE_LICENSE("GPL v2");
/*
* Driver for Cadence QSPI Controller
*
* Copyright Altera Corporation (C) 2012-2014. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/clk.h>
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/spi-nor.h>
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/sched.h>
#include <linux/spi/spi.h>
#include <linux/timer.h>
#define CQSPI_NAME "cadence-qspi"
#define CQSPI_MAX_CHIPSELECT 16
struct cqspi_st;
struct cqspi_flash_pdata {
struct spi_nor nor;
struct cqspi_st *cqspi;
u32 clk_rate;
u32 read_delay;
u32 tshsl_ns;
u32 tsd2d_ns;
u32 tchsh_ns;
u32 tslch_ns;
u8 inst_width;
u8 addr_width;
u8 data_width;
u8 cs;
bool registered;
};
struct cqspi_st {
struct platform_device *pdev;
struct clk *clk;
unsigned int sclk;
void __iomem *iobase;
void __iomem *ahb_base;
struct completion transfer_complete;
struct mutex bus_mutex;
int current_cs;
int current_page_size;
int current_erase_size;
int current_addr_width;
unsigned long master_ref_clk_hz;
bool is_decoded_cs;
u32 fifo_depth;
u32 fifo_width;
u32 trigger_address;
struct cqspi_flash_pdata f_pdata[CQSPI_MAX_CHIPSELECT];
};
/* Operation timeout value */
#define CQSPI_TIMEOUT_MS 500
#define CQSPI_READ_TIMEOUT_MS 10
/* Instruction type */
#define CQSPI_INST_TYPE_SINGLE 0
#define CQSPI_INST_TYPE_DUAL 1
#define CQSPI_INST_TYPE_QUAD 2
#define CQSPI_DUMMY_CLKS_PER_BYTE 8
#define CQSPI_DUMMY_BYTES_MAX 4
#define CQSPI_DUMMY_CLKS_MAX 31
#define CQSPI_STIG_DATA_LEN_MAX 8
/* Register map */
#define CQSPI_REG_CONFIG 0x00
#define CQSPI_REG_CONFIG_ENABLE_MASK BIT(0)
#define CQSPI_REG_CONFIG_DECODE_MASK BIT(9)
#define CQSPI_REG_CONFIG_CHIPSELECT_LSB 10
#define CQSPI_REG_CONFIG_DMA_MASK BIT(15)
#define CQSPI_REG_CONFIG_BAUD_LSB 19
#define CQSPI_REG_CONFIG_IDLE_LSB 31
#define CQSPI_REG_CONFIG_CHIPSELECT_MASK 0xF
#define CQSPI_REG_CONFIG_BAUD_MASK 0xF
#define CQSPI_REG_RD_INSTR 0x04
#define CQSPI_REG_RD_INSTR_OPCODE_LSB 0
#define CQSPI_REG_RD_INSTR_TYPE_INSTR_LSB 8
#define CQSPI_REG_RD_INSTR_TYPE_ADDR_LSB 12
#define CQSPI_REG_RD_INSTR_TYPE_DATA_LSB 16
#define CQSPI_REG_RD_INSTR_MODE_EN_LSB 20
#define CQSPI_REG_RD_INSTR_DUMMY_LSB 24
#define CQSPI_REG_RD_INSTR_TYPE_INSTR_MASK 0x3
#define CQSPI_REG_RD_INSTR_TYPE_ADDR_MASK 0x3
#define CQSPI_REG_RD_INSTR_TYPE_DATA_MASK 0x3
#define CQSPI_REG_RD_INSTR_DUMMY_MASK 0x1F
#define CQSPI_REG_WR_INSTR 0x08
#define CQSPI_REG_WR_INSTR_OPCODE_LSB 0
#define CQSPI_REG_WR_INSTR_TYPE_ADDR_LSB 12
#define CQSPI_REG_WR_INSTR_TYPE_DATA_LSB 16
#define CQSPI_REG_DELAY 0x0C
#define CQSPI_REG_DELAY_TSLCH_LSB 0
#define CQSPI_REG_DELAY_TCHSH_LSB 8
#define CQSPI_REG_DELAY_TSD2D_LSB 16
#define CQSPI_REG_DELAY_TSHSL_LSB 24
#define CQSPI_REG_DELAY_TSLCH_MASK 0xFF
#define CQSPI_REG_DELAY_TCHSH_MASK 0xFF
#define CQSPI_REG_DELAY_TSD2D_MASK 0xFF
#define CQSPI_REG_DELAY_TSHSL_MASK 0xFF
#define CQSPI_REG_READCAPTURE 0x10
#define CQSPI_REG_READCAPTURE_BYPASS_LSB 0
#define CQSPI_REG_READCAPTURE_DELAY_LSB 1
#define CQSPI_REG_READCAPTURE_DELAY_MASK 0xF
#define CQSPI_REG_SIZE 0x14
#define CQSPI_REG_SIZE_ADDRESS_LSB 0
#define CQSPI_REG_SIZE_PAGE_LSB 4
#define CQSPI_REG_SIZE_BLOCK_LSB 16
#define CQSPI_REG_SIZE_ADDRESS_MASK 0xF
#define CQSPI_REG_SIZE_PAGE_MASK 0xFFF
#define CQSPI_REG_SIZE_BLOCK_MASK 0x3F
#define CQSPI_REG_SRAMPARTITION 0x18
#define CQSPI_REG_INDIRECTTRIGGER 0x1C
#define CQSPI_REG_DMA 0x20
#define CQSPI_REG_DMA_SINGLE_LSB 0
#define CQSPI_REG_DMA_BURST_LSB 8
#define CQSPI_REG_DMA_SINGLE_MASK 0xFF
#define CQSPI_REG_DMA_BURST_MASK 0xFF
#define CQSPI_REG_REMAP 0x24
#define CQSPI_REG_MODE_BIT 0x28
#define CQSPI_REG_SDRAMLEVEL 0x2C
#define CQSPI_REG_SDRAMLEVEL_RD_LSB 0
#define CQSPI_REG_SDRAMLEVEL_WR_LSB 16
#define CQSPI_REG_SDRAMLEVEL_RD_MASK 0xFFFF
#define CQSPI_REG_SDRAMLEVEL_WR_MASK 0xFFFF
#define CQSPI_REG_IRQSTATUS 0x40
#define CQSPI_REG_IRQMASK 0x44
#define CQSPI_REG_INDIRECTRD 0x60
#define CQSPI_REG_INDIRECTRD_START_MASK BIT(0)
#define CQSPI_REG_INDIRECTRD_CANCEL_MASK BIT(1)
#define CQSPI_REG_INDIRECTRD_DONE_MASK BIT(5)
#define CQSPI_REG_INDIRECTRDWATERMARK 0x64
#define CQSPI_REG_INDIRECTRDSTARTADDR 0x68
#define CQSPI_REG_INDIRECTRDBYTES 0x6C
#define CQSPI_REG_CMDCTRL 0x90
#define CQSPI_REG_CMDCTRL_EXECUTE_MASK BIT(0)
#define CQSPI_REG_CMDCTRL_INPROGRESS_MASK BIT(1)
#define CQSPI_REG_CMDCTRL_WR_BYTES_LSB 12
#define CQSPI_REG_CMDCTRL_WR_EN_LSB 15
#define CQSPI_REG_CMDCTRL_ADD_BYTES_LSB 16
#define CQSPI_REG_CMDCTRL_ADDR_EN_LSB 19
#define CQSPI_REG_CMDCTRL_RD_BYTES_LSB 20
#define CQSPI_REG_CMDCTRL_RD_EN_LSB 23
#define CQSPI_REG_CMDCTRL_OPCODE_LSB 24
#define CQSPI_REG_CMDCTRL_WR_BYTES_MASK 0x7
#define CQSPI_REG_CMDCTRL_ADD_BYTES_MASK 0x3
#define CQSPI_REG_CMDCTRL_RD_BYTES_MASK 0x7
#define CQSPI_REG_INDIRECTWR 0x70
#define CQSPI_REG_INDIRECTWR_START_MASK BIT(0)
#define CQSPI_REG_INDIRECTWR_CANCEL_MASK BIT(1)
#define CQSPI_REG_INDIRECTWR_DONE_MASK BIT(5)
#define CQSPI_REG_INDIRECTWRWATERMARK 0x74
#define CQSPI_REG_INDIRECTWRSTARTADDR 0x78
#define CQSPI_REG_INDIRECTWRBYTES 0x7C
#define CQSPI_REG_CMDADDRESS 0x94
#define CQSPI_REG_CMDREADDATALOWER 0xA0
#define CQSPI_REG_CMDREADDATAUPPER 0xA4
#define CQSPI_REG_CMDWRITEDATALOWER 0xA8
#define CQSPI_REG_CMDWRITEDATAUPPER 0xAC
/* Interrupt status bits */
#define CQSPI_REG_IRQ_MODE_ERR BIT(0)
#define CQSPI_REG_IRQ_UNDERFLOW BIT(1)
#define CQSPI_REG_IRQ_IND_COMP BIT(2)
#define CQSPI_REG_IRQ_IND_RD_REJECT BIT(3)
#define CQSPI_REG_IRQ_WR_PROTECTED_ERR BIT(4)
#define CQSPI_REG_IRQ_ILLEGAL_AHB_ERR BIT(5)
#define CQSPI_REG_IRQ_WATERMARK BIT(6)
#define CQSPI_REG_IRQ_IND_SRAM_FULL BIT(12)
#define CQSPI_IRQ_MASK_RD (CQSPI_REG_IRQ_WATERMARK | \
CQSPI_REG_IRQ_IND_SRAM_FULL | \
CQSPI_REG_IRQ_IND_COMP)
#define CQSPI_IRQ_MASK_WR (CQSPI_REG_IRQ_IND_COMP | \
CQSPI_REG_IRQ_WATERMARK | \
CQSPI_REG_IRQ_UNDERFLOW)
#define CQSPI_IRQ_STATUS_MASK 0x1FFFF
static int cqspi_wait_for_bit(void __iomem *reg, const u32 mask, bool clear)
{
unsigned long end = jiffies + msecs_to_jiffies(CQSPI_TIMEOUT_MS);
u32 val;
while (1) {
val = readl(reg);
if (clear)
val = ~val;
val &= mask;
if (val == mask)
return 0;
if (time_after(jiffies, end))
return -ETIMEDOUT;
}
}
static bool cqspi_is_idle(struct cqspi_st *cqspi)
{
u32 reg = readl(cqspi->iobase + CQSPI_REG_CONFIG);
return reg & (1 << CQSPI_REG_CONFIG_IDLE_LSB);
}
static u32 cqspi_get_rd_sram_level(struct cqspi_st *cqspi)
{
u32 reg = readl(cqspi->iobase + CQSPI_REG_SDRAMLEVEL);
reg >>= CQSPI_REG_SDRAMLEVEL_RD_LSB;
return reg & CQSPI_REG_SDRAMLEVEL_RD_MASK;
}
static irqreturn_t cqspi_irq_handler(int this_irq, void *dev)
{
struct cqspi_st *cqspi = dev;
unsigned int irq_status;
/* Read interrupt status */
irq_status = readl(cqspi->iobase + CQSPI_REG_IRQSTATUS);
/* Clear interrupt */
writel(irq_status, cqspi->iobase + CQSPI_REG_IRQSTATUS);
irq_status &= CQSPI_IRQ_MASK_RD | CQSPI_IRQ_MASK_WR;
if (irq_status)
complete(&cqspi->transfer_complete);
return IRQ_HANDLED;
}
static unsigned int cqspi_calc_rdreg(struct spi_nor *nor, const u8 opcode)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
u32 rdreg = 0;
rdreg |= f_pdata->inst_width << CQSPI_REG_RD_INSTR_TYPE_INSTR_LSB;
rdreg |= f_pdata->addr_width << CQSPI_REG_RD_INSTR_TYPE_ADDR_LSB;
rdreg |= f_pdata->data_width << CQSPI_REG_RD_INSTR_TYPE_DATA_LSB;
return rdreg;
}
static int cqspi_wait_idle(struct cqspi_st *cqspi)
{
const unsigned int poll_idle_retry = 3;
unsigned int count = 0;
unsigned long timeout;
timeout = jiffies + msecs_to_jiffies(CQSPI_TIMEOUT_MS);
while (1) {
/*
* Read a few times in succession to ensure the controller
* is indeed idle, that is, the bit does not transition
* low again.
*/
if (cqspi_is_idle(cqspi))
count++;
else
count = 0;
if (count >= poll_idle_retry)
return 0;
if (time_after(jiffies, timeout)) {
/* Timed out while the controller stayed busy. */
dev_err(&cqspi->pdev->dev,
"QSPI is still busy after %dms timeout.\n",
CQSPI_TIMEOUT_MS);
return -ETIMEDOUT;
}
cpu_relax();
}
}
static int cqspi_exec_flash_cmd(struct cqspi_st *cqspi, unsigned int reg)
{
void __iomem *reg_base = cqspi->iobase;
int ret;
/* Write CMDCTRL without starting execution. */
writel(reg, reg_base + CQSPI_REG_CMDCTRL);
/* Start execution. */
reg |= CQSPI_REG_CMDCTRL_EXECUTE_MASK;
writel(reg, reg_base + CQSPI_REG_CMDCTRL);
/* Polling for completion. */
ret = cqspi_wait_for_bit(reg_base + CQSPI_REG_CMDCTRL,
CQSPI_REG_CMDCTRL_INPROGRESS_MASK, 1);
if (ret) {
dev_err(&cqspi->pdev->dev,
"Flash command execution timed out.\n");
return ret;
}
/* Polling QSPI idle status. */
return cqspi_wait_idle(cqspi);
}
static int cqspi_command_read(struct spi_nor *nor,
const u8 *txbuf, const unsigned n_tx,
u8 *rxbuf, const unsigned n_rx)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
unsigned int rdreg;
unsigned int reg;
unsigned int read_len;
int status;
if (!n_rx || n_rx > CQSPI_STIG_DATA_LEN_MAX || !rxbuf) {
dev_err(nor->dev, "Invalid input argument, len %d rxbuf 0x%p\n",
n_rx, rxbuf);
return -EINVAL;
}
reg = txbuf[0] << CQSPI_REG_CMDCTRL_OPCODE_LSB;
rdreg = cqspi_calc_rdreg(nor, txbuf[0]);
writel(rdreg, reg_base + CQSPI_REG_RD_INSTR);
reg |= (0x1 << CQSPI_REG_CMDCTRL_RD_EN_LSB);
/* 0 means 1 byte. */
reg |= (((n_rx - 1) & CQSPI_REG_CMDCTRL_RD_BYTES_MASK)
<< CQSPI_REG_CMDCTRL_RD_BYTES_LSB);
status = cqspi_exec_flash_cmd(cqspi, reg);
if (status)
return status;
reg = readl(reg_base + CQSPI_REG_CMDREADDATALOWER);
/* Put the read value into rx_buf */
read_len = (n_rx > 4) ? 4 : n_rx;
memcpy(rxbuf, &reg, read_len);
rxbuf += read_len;
if (n_rx > 4) {
reg = readl(reg_base + CQSPI_REG_CMDREADDATAUPPER);
read_len = n_rx - read_len;
memcpy(rxbuf, &reg, read_len);
}
return 0;
}
static int cqspi_command_write(struct spi_nor *nor, const u8 opcode,
const u8 *txbuf, const unsigned n_tx)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
unsigned int reg;
unsigned int data;
int ret;
if (n_tx > 4 || (n_tx && !txbuf)) {
dev_err(nor->dev,
"Invalid input argument, cmdlen %d txbuf 0x%p\n",
n_tx, txbuf);
return -EINVAL;
}
reg = opcode << CQSPI_REG_CMDCTRL_OPCODE_LSB;
if (n_tx) {
reg |= (0x1 << CQSPI_REG_CMDCTRL_WR_EN_LSB);
reg |= ((n_tx - 1) & CQSPI_REG_CMDCTRL_WR_BYTES_MASK)
<< CQSPI_REG_CMDCTRL_WR_BYTES_LSB;
data = 0;
memcpy(&data, txbuf, n_tx);
writel(data, reg_base + CQSPI_REG_CMDWRITEDATALOWER);
}
ret = cqspi_exec_flash_cmd(cqspi, reg);
return ret;
}
static int cqspi_command_write_addr(struct spi_nor *nor,
const u8 opcode, const unsigned int addr)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
unsigned int reg;
reg = opcode << CQSPI_REG_CMDCTRL_OPCODE_LSB;
reg |= (0x1 << CQSPI_REG_CMDCTRL_ADDR_EN_LSB);
reg |= ((nor->addr_width - 1) & CQSPI_REG_CMDCTRL_ADD_BYTES_MASK)
<< CQSPI_REG_CMDCTRL_ADD_BYTES_LSB;
writel(addr, reg_base + CQSPI_REG_CMDADDRESS);
return cqspi_exec_flash_cmd(cqspi, reg);
}
static int cqspi_indirect_read_setup(struct spi_nor *nor,
const unsigned int from_addr)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
unsigned int dummy_clk = 0;
unsigned int reg;
writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
reg = nor->read_opcode << CQSPI_REG_RD_INSTR_OPCODE_LSB;
reg |= cqspi_calc_rdreg(nor, nor->read_opcode);
/* Setup dummy clock cycles */
dummy_clk = nor->read_dummy;
if (dummy_clk > CQSPI_DUMMY_CLKS_MAX)
dummy_clk = CQSPI_DUMMY_CLKS_MAX;
if (dummy_clk / 8) {
reg |= (1 << CQSPI_REG_RD_INSTR_MODE_EN_LSB);
/* Set mode bits high to ensure chip doesn't enter XIP */
writel(0xFF, reg_base + CQSPI_REG_MODE_BIT);
/* Need to subtract the mode byte (8 clocks). */
if (f_pdata->inst_width != CQSPI_INST_TYPE_QUAD)
dummy_clk -= 8;
if (dummy_clk)
reg |= (dummy_clk & CQSPI_REG_RD_INSTR_DUMMY_MASK)
<< CQSPI_REG_RD_INSTR_DUMMY_LSB;
}
writel(reg, reg_base + CQSPI_REG_RD_INSTR);
/* Set address width */
reg = readl(reg_base + CQSPI_REG_SIZE);
reg &= ~CQSPI_REG_SIZE_ADDRESS_MASK;
reg |= (nor->addr_width - 1);
writel(reg, reg_base + CQSPI_REG_SIZE);
return 0;
}
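When eight or more dummy clocks are requested, the setup above spends eight of them on a mode byte (driven to 0xFF so the flash cannot latch an XIP pattern) unless the instruction phase itself is quad, and programs whatever remains as plain dummy clocks. A sketch of that budget, assuming only the constants defined earlier in this driver:

#include <stdio.h>

#define DUMMY_CLKS_MAX	31	/* mirrors CQSPI_DUMMY_CLKS_MAX */

/* Sketch of the dummy/mode-bit budget in cqspi_indirect_read_setup(). */
static void rd_setup(int read_dummy, int quad_instruction)
{
	int dummy_clk = read_dummy > DUMMY_CLKS_MAX ? DUMMY_CLKS_MAX
						    : read_dummy;
	int mode_byte = dummy_clk / 8 ? 1 : 0;

	/* The mode byte itself costs 8 clocks on non-quad-instruction parts */
	if (mode_byte && !quad_instruction)
		dummy_clk -= 8;
	printf("mode_byte=%d remaining_dummy=%d\n", mode_byte, dummy_clk);
}

int main(void)
{
	rd_setup(10, 0);	/* -> mode byte plus 2 dummy clocks */
	rd_setup(8, 1);		/* quad instruction: mode byte, 8 dummies kept */
	return 0;
}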
static int cqspi_indirect_read_execute(struct spi_nor *nor,
u8 *rxbuf, const unsigned n_rx)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
void __iomem *ahb_base = cqspi->ahb_base;
unsigned int remaining = n_rx;
unsigned int bytes_to_read = 0;
int ret = 0;
writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES);
/* Clear all interrupts. */
writel(CQSPI_IRQ_STATUS_MASK, reg_base + CQSPI_REG_IRQSTATUS);
writel(CQSPI_IRQ_MASK_RD, reg_base + CQSPI_REG_IRQMASK);
reinit_completion(&cqspi->transfer_complete);
writel(CQSPI_REG_INDIRECTRD_START_MASK,
reg_base + CQSPI_REG_INDIRECTRD);
while (remaining > 0) {
ret = wait_for_completion_timeout(&cqspi->transfer_complete,
msecs_to_jiffies
(CQSPI_READ_TIMEOUT_MS));
bytes_to_read = cqspi_get_rd_sram_level(cqspi);
if (!ret && bytes_to_read == 0) {
dev_err(nor->dev, "Indirect read timeout, no bytes\n");
ret = -ETIMEDOUT;
goto failrd;
}
while (bytes_to_read != 0) {
bytes_to_read *= cqspi->fifo_width;
bytes_to_read = bytes_to_read > remaining ?
remaining : bytes_to_read;
readsl(ahb_base, rxbuf, DIV_ROUND_UP(bytes_to_read, 4));
rxbuf += bytes_to_read;
remaining -= bytes_to_read;
bytes_to_read = cqspi_get_rd_sram_level(cqspi);
}
if (remaining > 0)
reinit_completion(&cqspi->transfer_complete);
}
/* Check indirect done status */
ret = cqspi_wait_for_bit(reg_base + CQSPI_REG_INDIRECTRD,
CQSPI_REG_INDIRECTRD_DONE_MASK, 0);
if (ret) {
dev_err(nor->dev,
"Indirect read completion error (%i)\n", ret);
goto failrd;
}
/* Disable interrupt */
writel(0, reg_base + CQSPI_REG_IRQMASK);
/* Clear indirect completion status */
writel(CQSPI_REG_INDIRECTRD_DONE_MASK, reg_base + CQSPI_REG_INDIRECTRD);
return 0;
failrd:
/* Disable interrupt */
writel(0, reg_base + CQSPI_REG_IRQMASK);
/* Cancel the indirect read */
writel(CQSPI_REG_INDIRECTRD_CANCEL_MASK,
reg_base + CQSPI_REG_INDIRECTRD);
return ret;
}
static int cqspi_indirect_write_setup(struct spi_nor *nor,
const unsigned int to_addr)
{
unsigned int reg;
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
/* Set opcode. */
reg = nor->program_opcode << CQSPI_REG_WR_INSTR_OPCODE_LSB;
writel(reg, reg_base + CQSPI_REG_WR_INSTR);
reg = cqspi_calc_rdreg(nor, nor->program_opcode);
writel(reg, reg_base + CQSPI_REG_RD_INSTR);
writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
reg = readl(reg_base + CQSPI_REG_SIZE);
reg &= ~CQSPI_REG_SIZE_ADDRESS_MASK;
reg |= (nor->addr_width - 1);
writel(reg, reg_base + CQSPI_REG_SIZE);
return 0;
}
static int cqspi_indirect_write_execute(struct spi_nor *nor,
const u8 *txbuf, const unsigned n_tx)
{
const unsigned int page_size = nor->page_size;
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
unsigned int remaining = n_tx;
unsigned int write_bytes;
int ret;
writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES);
/* Clear all interrupts. */
writel(CQSPI_IRQ_STATUS_MASK, reg_base + CQSPI_REG_IRQSTATUS);
writel(CQSPI_IRQ_MASK_WR, reg_base + CQSPI_REG_IRQMASK);
reinit_completion(&cqspi->transfer_complete);
writel(CQSPI_REG_INDIRECTWR_START_MASK,
reg_base + CQSPI_REG_INDIRECTWR);
while (remaining > 0) {
write_bytes = remaining > page_size ? page_size : remaining;
writesl(cqspi->ahb_base, txbuf, DIV_ROUND_UP(write_bytes, 4));
ret = wait_for_completion_timeout(&cqspi->transfer_complete,
msecs_to_jiffies
(CQSPI_TIMEOUT_MS));
if (!ret) {
dev_err(nor->dev, "Indirect write timeout\n");
ret = -ETIMEDOUT;
goto failwr;
}
txbuf += write_bytes;
remaining -= write_bytes;
if (remaining > 0)
reinit_completion(&cqspi->transfer_complete);
}
/* Check indirect done status */
ret = cqspi_wait_for_bit(reg_base + CQSPI_REG_INDIRECTWR,
CQSPI_REG_INDIRECTWR_DONE_MASK, 0);
if (ret) {
dev_err(nor->dev,
"Indirect write completion error (%i)\n", ret);
goto failwr;
}
/* Disable interrupt. */
writel(0, reg_base + CQSPI_REG_IRQMASK);
/* Clear indirect completion status */
writel(CQSPI_REG_INDIRECTWR_DONE_MASK, reg_base + CQSPI_REG_INDIRECTWR);
cqspi_wait_idle(cqspi);
return 0;
failwr:
/* Disable interrupt. */
writel(0, reg_base + CQSPI_REG_IRQMASK);
/* Cancel the indirect write */
writel(CQSPI_REG_INDIRECTWR_CANCEL_MASK,
reg_base + CQSPI_REG_INDIRECTWR);
return ret;
}
static void cqspi_chipselect(struct spi_nor *nor)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *reg_base = cqspi->iobase;
unsigned int chip_select = f_pdata->cs;
unsigned int reg;
reg = readl(reg_base + CQSPI_REG_CONFIG);
if (cqspi->is_decoded_cs) {
reg |= CQSPI_REG_CONFIG_DECODE_MASK;
} else {
reg &= ~CQSPI_REG_CONFIG_DECODE_MASK;
/* Convert the CS to an active-low one-hot pattern when no external decoder is used:
* CS0 to 4b'1110
* CS1 to 4b'1101
* CS2 to 4b'1011
* CS3 to 4b'0111
*/
chip_select = 0xF & ~(1 << chip_select);
}
reg &= ~(CQSPI_REG_CONFIG_CHIPSELECT_MASK
<< CQSPI_REG_CONFIG_CHIPSELECT_LSB);
reg |= (chip_select & CQSPI_REG_CONFIG_CHIPSELECT_MASK)
<< CQSPI_REG_CONFIG_CHIPSELECT_LSB;
writel(reg, reg_base + CQSPI_REG_CONFIG);
}
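Without an external decoder the four chip-select lines are driven directly and active-low, so the register field is the inverted one-hot pattern from the comment above; a two-line check of the conversion:

#include <stdio.h>

int main(void)
{
	/* CS0..CS3 -> 0xE, 0xD, 0xB, 0x7 (active-low, one-hot inverted) */
	for (unsigned int cs = 0; cs < 4; cs++)
		printf("CS%u -> 0x%X\n", cs, 0xF & ~(1U << cs));
	return 0;
}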
static void cqspi_configure_cs_and_sizes(struct spi_nor *nor)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *iobase = cqspi->iobase;
unsigned int reg;
/* configure page size and block size. */
reg = readl(iobase + CQSPI_REG_SIZE);
reg &= ~(CQSPI_REG_SIZE_PAGE_MASK << CQSPI_REG_SIZE_PAGE_LSB);
reg &= ~(CQSPI_REG_SIZE_BLOCK_MASK << CQSPI_REG_SIZE_BLOCK_LSB);
reg &= ~CQSPI_REG_SIZE_ADDRESS_MASK;
reg |= (nor->page_size << CQSPI_REG_SIZE_PAGE_LSB);
reg |= (ilog2(nor->mtd.erasesize) << CQSPI_REG_SIZE_BLOCK_LSB);
reg |= (nor->addr_width - 1);
writel(reg, iobase + CQSPI_REG_SIZE);
/* configure the chip select */
cqspi_chipselect(nor);
/* Store the new configuration of the controller */
cqspi->current_page_size = nor->page_size;
cqspi->current_erase_size = nor->mtd.erasesize;
cqspi->current_addr_width = nor->addr_width;
}
static unsigned int calculate_ticks_for_ns(const unsigned int ref_clk_hz,
const unsigned int ns_val)
{
unsigned int ticks;
ticks = ref_clk_hz / 1000; /* kHz */
ticks = DIV_ROUND_UP(ticks * ns_val, 1000000);
return ticks;
}
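calculate_ticks_for_ns() rounds a nanosecond delay up to whole reference-clock ticks, working in kHz first so the intermediate product stays within 32 bits. Worked numbers, assuming an illustrative 400 MHz reference clock:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d)) /* as in the kernel */

static unsigned int ticks_for_ns(unsigned int ref_clk_hz, unsigned int ns)
{
	unsigned int khz = ref_clk_hz / 1000;

	return DIV_ROUND_UP(khz * ns, 1000000);
}

int main(void)
{
	/* 400 MHz ref, 50 ns -> 400000 kHz * 50 / 1e6 = 20 ticks exactly;
	 * 51 ns rounds up to 21 rather than truncating the delay short. */
	printf("%u %u\n", ticks_for_ns(400000000, 50),
	       ticks_for_ns(400000000, 51));
	return 0;
}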
static void cqspi_delay(struct spi_nor *nor)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
void __iomem *iobase = cqspi->iobase;
const unsigned int ref_clk_hz = cqspi->master_ref_clk_hz;
unsigned int tshsl, tchsh, tslch, tsd2d;
unsigned int reg;
unsigned int tsclk;
/* calculate the number of ref ticks for one sclk tick */
tsclk = DIV_ROUND_UP(ref_clk_hz, cqspi->sclk);
tshsl = calculate_ticks_for_ns(ref_clk_hz, f_pdata->tshsl_ns);
/* this particular value must be at least one sclk */
if (tshsl < tsclk)
tshsl = tsclk;
tchsh = calculate_ticks_for_ns(ref_clk_hz, f_pdata->tchsh_ns);
tslch = calculate_ticks_for_ns(ref_clk_hz, f_pdata->tslch_ns);
tsd2d = calculate_ticks_for_ns(ref_clk_hz, f_pdata->tsd2d_ns);
reg = (tshsl & CQSPI_REG_DELAY_TSHSL_MASK)
<< CQSPI_REG_DELAY_TSHSL_LSB;
reg |= (tchsh & CQSPI_REG_DELAY_TCHSH_MASK)
<< CQSPI_REG_DELAY_TCHSH_LSB;
reg |= (tslch & CQSPI_REG_DELAY_TSLCH_MASK)
<< CQSPI_REG_DELAY_TSLCH_LSB;
reg |= (tsd2d & CQSPI_REG_DELAY_TSD2D_MASK)
<< CQSPI_REG_DELAY_TSD2D_LSB;
writel(reg, iobase + CQSPI_REG_DELAY);
}
static void cqspi_config_baudrate_div(struct cqspi_st *cqspi)
{
const unsigned int ref_clk_hz = cqspi->master_ref_clk_hz;
void __iomem *reg_base = cqspi->iobase;
u32 reg, div;
/* Recalculate the baudrate divisor based on QSPI specification. */
div = DIV_ROUND_UP(ref_clk_hz, 2 * cqspi->sclk) - 1;
reg = readl(reg_base + CQSPI_REG_CONFIG);
reg &= ~(CQSPI_REG_CONFIG_BAUD_MASK << CQSPI_REG_CONFIG_BAUD_LSB);
reg |= (div & CQSPI_REG_CONFIG_BAUD_MASK) << CQSPI_REG_CONFIG_BAUD_LSB;
writel(reg, reg_base + CQSPI_REG_CONFIG);
}
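The baud divisor field gives an output clock of ref / (2 * (div + 1)), so the DIV_ROUND_UP above picks the smallest divider whose output does not overshoot the requested sclk. A worked example under assumed clock rates:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d)) /* as in the kernel */

int main(void)
{
	unsigned int ref = 400000000;	/* assumed master reference clock */
	unsigned int sclk = 50000000;	/* requested flash clock */
	unsigned int div = DIV_ROUND_UP(ref, 2 * sclk) - 1;

	/* div = 3, output = 400 MHz / (2 * (3 + 1)) = 50 MHz exactly */
	printf("div=%u out=%u Hz\n", div, ref / (2 * (div + 1)));
	return 0;
}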
static void cqspi_readdata_capture(struct cqspi_st *cqspi,
const unsigned int bypass,
const unsigned int delay)
{
void __iomem *reg_base = cqspi->iobase;
unsigned int reg;
reg = readl(reg_base + CQSPI_REG_READCAPTURE);
if (bypass)
reg |= (1 << CQSPI_REG_READCAPTURE_BYPASS_LSB);
else
reg &= ~(1 << CQSPI_REG_READCAPTURE_BYPASS_LSB);
reg &= ~(CQSPI_REG_READCAPTURE_DELAY_MASK
<< CQSPI_REG_READCAPTURE_DELAY_LSB);
reg |= (delay & CQSPI_REG_READCAPTURE_DELAY_MASK)
<< CQSPI_REG_READCAPTURE_DELAY_LSB;
writel(reg, reg_base + CQSPI_REG_READCAPTURE);
}
static void cqspi_controller_enable(struct cqspi_st *cqspi, bool enable)
{
void __iomem *reg_base = cqspi->iobase;
unsigned int reg;
reg = readl(reg_base + CQSPI_REG_CONFIG);
if (enable)
reg |= CQSPI_REG_CONFIG_ENABLE_MASK;
else
reg &= ~CQSPI_REG_CONFIG_ENABLE_MASK;
writel(reg, reg_base + CQSPI_REG_CONFIG);
}
static void cqspi_configure(struct spi_nor *nor)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
const unsigned int sclk = f_pdata->clk_rate;
int switch_cs = (cqspi->current_cs != f_pdata->cs);
int switch_ck = (cqspi->sclk != sclk);
if ((cqspi->current_page_size != nor->page_size) ||
(cqspi->current_erase_size != nor->mtd.erasesize) ||
(cqspi->current_addr_width != nor->addr_width))
switch_cs = 1;
if (switch_cs || switch_ck)
cqspi_controller_enable(cqspi, 0);
/* Switch chip select. */
if (switch_cs) {
cqspi->current_cs = f_pdata->cs;
cqspi_configure_cs_and_sizes(nor);
}
/* Setup baudrate divisor and delays */
if (switch_ck) {
cqspi->sclk = sclk;
cqspi_config_baudrate_div(cqspi);
cqspi_delay(nor);
cqspi_readdata_capture(cqspi, 1, f_pdata->read_delay);
}
if (switch_cs || switch_ck)
cqspi_controller_enable(cqspi, 1);
}
static int cqspi_set_protocol(struct spi_nor *nor, const int read)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
f_pdata->inst_width = CQSPI_INST_TYPE_SINGLE;
f_pdata->addr_width = CQSPI_INST_TYPE_SINGLE;
f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
if (read) {
switch (nor->flash_read) {
case SPI_NOR_NORMAL:
case SPI_NOR_FAST:
f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
break;
case SPI_NOR_DUAL:
f_pdata->data_width = CQSPI_INST_TYPE_DUAL;
break;
case SPI_NOR_QUAD:
f_pdata->data_width = CQSPI_INST_TYPE_QUAD;
break;
default:
return -EINVAL;
}
}
cqspi_configure(nor);
return 0;
}
static ssize_t cqspi_write(struct spi_nor *nor, loff_t to,
size_t len, const u_char *buf)
{
int ret;
ret = cqspi_set_protocol(nor, 0);
if (ret)
return ret;
ret = cqspi_indirect_write_setup(nor, to);
if (ret)
return ret;
ret = cqspi_indirect_write_execute(nor, buf, len);
if (ret)
return ret;
return len;
}
static ssize_t cqspi_read(struct spi_nor *nor, loff_t from,
size_t len, u_char *buf)
{
int ret;
ret = cqspi_set_protocol(nor, 1);
if (ret)
return ret;
ret = cqspi_indirect_read_setup(nor, from);
if (ret)
return ret;
ret = cqspi_indirect_read_execute(nor, buf, len);
if (ret)
return ret;
return len;
}
static int cqspi_erase(struct spi_nor *nor, loff_t offs)
{
int ret;
ret = cqspi_set_protocol(nor, 0);
if (ret)
return ret;
/* Send write enable, then erase commands. */
ret = nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0);
if (ret)
return ret;
/* Set up command buffer. */
ret = cqspi_command_write_addr(nor, nor->erase_opcode, offs);
if (ret)
return ret;
return 0;
}
static int cqspi_prep(struct spi_nor *nor, enum spi_nor_ops ops)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
mutex_lock(&cqspi->bus_mutex);
return 0;
}
static void cqspi_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
{
struct cqspi_flash_pdata *f_pdata = nor->priv;
struct cqspi_st *cqspi = f_pdata->cqspi;
mutex_unlock(&cqspi->bus_mutex);
}
static int cqspi_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
{
int ret;
ret = cqspi_set_protocol(nor, 0);
if (!ret)
ret = cqspi_command_read(nor, &opcode, 1, buf, len);
return ret;
}
static int cqspi_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
{
int ret;
ret = cqspi_set_protocol(nor, 0);
if (!ret)
ret = cqspi_command_write(nor, opcode, buf, len);
return ret;
}
static int cqspi_of_get_flash_pdata(struct platform_device *pdev,
struct cqspi_flash_pdata *f_pdata,
struct device_node *np)
{
if (of_property_read_u32(np, "cdns,read-delay", &f_pdata->read_delay)) {
dev_err(&pdev->dev, "couldn't determine read-delay\n");
return -ENXIO;
}
if (of_property_read_u32(np, "cdns,tshsl-ns", &f_pdata->tshsl_ns)) {
dev_err(&pdev->dev, "couldn't determine tshsl-ns\n");
return -ENXIO;
}
if (of_property_read_u32(np, "cdns,tsd2d-ns", &f_pdata->tsd2d_ns)) {
dev_err(&pdev->dev, "couldn't determine tsd2d-ns\n");
return -ENXIO;
}
if (of_property_read_u32(np, "cdns,tchsh-ns", &f_pdata->tchsh_ns)) {
dev_err(&pdev->dev, "couldn't determine tchsh-ns\n");
return -ENXIO;
}
if (of_property_read_u32(np, "cdns,tslch-ns", &f_pdata->tslch_ns)) {
dev_err(&pdev->dev, "couldn't determine tslch-ns\n");
return -ENXIO;
}
if (of_property_read_u32(np, "spi-max-frequency", &f_pdata->clk_rate)) {
dev_err(&pdev->dev, "couldn't determine spi-max-frequency\n");
return -ENXIO;
}
return 0;
}
static int cqspi_of_get_pdata(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct cqspi_st *cqspi = platform_get_drvdata(pdev);
cqspi->is_decoded_cs = of_property_read_bool(np, "cdns,is-decoded-cs");
if (of_property_read_u32(np, "cdns,fifo-depth", &cqspi->fifo_depth)) {
dev_err(&pdev->dev, "couldn't determine fifo-depth\n");
return -ENXIO;
}
if (of_property_read_u32(np, "cdns,fifo-width", &cqspi->fifo_width)) {
dev_err(&pdev->dev, "couldn't determine fifo-width\n");
return -ENXIO;
}
if (of_property_read_u32(np, "cdns,trigger-address",
&cqspi->trigger_address)) {
dev_err(&pdev->dev, "couldn't determine trigger-address\n");
return -ENXIO;
}
return 0;
}
static void cqspi_controller_init(struct cqspi_st *cqspi)
{
cqspi_controller_enable(cqspi, 0);
/* Configure the remap address register, no remap */
writel(0, cqspi->iobase + CQSPI_REG_REMAP);
/* Disable all interrupts. */
writel(0, cqspi->iobase + CQSPI_REG_IRQMASK);
/* Configure the SRAM split to 1:1. */
writel(cqspi->fifo_depth / 2, cqspi->iobase + CQSPI_REG_SRAMPARTITION);
/* Load indirect trigger address. */
writel(cqspi->trigger_address,
cqspi->iobase + CQSPI_REG_INDIRECTTRIGGER);
/* Program read watermark -- 1/2 of the FIFO. */
writel(cqspi->fifo_depth * cqspi->fifo_width / 2,
cqspi->iobase + CQSPI_REG_INDIRECTRDWATERMARK);
/* Program write watermark -- 1/8 of the FIFO. */
writel(cqspi->fifo_depth * cqspi->fifo_width / 8,
cqspi->iobase + CQSPI_REG_INDIRECTWRWATERMARK);
cqspi_controller_enable(cqspi, 1);
}
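To make those initialization writes concrete, assume the common case of cdns,fifo-depth = 128 and cdns,fifo-width = 4 (both values come from the device tree, so these numbers are illustrative): the SRAM partition register counts FIFO locations, while the watermarks count bytes.

#include <stdio.h>

int main(void)
{
	unsigned int fifo_depth = 128;	/* assumed cdns,fifo-depth */
	unsigned int fifo_width = 4;	/* assumed cdns,fifo-width (bytes) */

	/* SRAMPARTITION counts FIFO locations; watermarks count bytes. */
	printf("sram split: %u locations for read\n", fifo_depth / 2);
	printf("rd watermark: %u bytes\n", fifo_depth * fifo_width / 2);
	printf("wr watermark: %u bytes\n", fifo_depth * fifo_width / 8);
	return 0;
}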
static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
{
struct platform_device *pdev = cqspi->pdev;
struct device *dev = &pdev->dev;
struct cqspi_flash_pdata *f_pdata;
struct spi_nor *nor;
struct mtd_info *mtd;
unsigned int cs;
int i, ret;
/* Get flash device data */
for_each_available_child_of_node(dev->of_node, np) {
if (of_property_read_u32(np, "reg", &cs)) {
dev_err(dev, "Couldn't determine chip select.\n");
ret = -ENXIO;
goto err;
}
if (cs >= CQSPI_MAX_CHIPSELECT) {
dev_err(dev, "Chip select %d out of range.\n", cs);
ret = -EINVAL;
goto err;
}
f_pdata = &cqspi->f_pdata[cs];
f_pdata->cqspi = cqspi;
f_pdata->cs = cs;
ret = cqspi_of_get_flash_pdata(pdev, f_pdata, np);
if (ret)
goto err;
nor = &f_pdata->nor;
mtd = &nor->mtd;
mtd->priv = nor;
nor->dev = dev;
spi_nor_set_flash_node(nor, np);
nor->priv = f_pdata;
nor->read_reg = cqspi_read_reg;
nor->write_reg = cqspi_write_reg;
nor->read = cqspi_read;
nor->write = cqspi_write;
nor->erase = cqspi_erase;
nor->prepare = cqspi_prep;
nor->unprepare = cqspi_unprep;
mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%s.%d",
dev_name(dev), cs);
if (!mtd->name) {
ret = -ENOMEM;
goto err;
}
ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
if (ret)
goto err;
ret = mtd_device_register(mtd, NULL, 0);
if (ret)
goto err;
f_pdata->registered = true;
}
return 0;
err:
for (i = 0; i < CQSPI_MAX_CHIPSELECT; i++)
if (cqspi->f_pdata[i].registered)
mtd_device_unregister(&cqspi->f_pdata[i].nor.mtd);
return ret;
}
static int cqspi_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct device *dev = &pdev->dev;
struct cqspi_st *cqspi;
struct resource *res;
struct resource *res_ahb;
int ret;
int irq;
cqspi = devm_kzalloc(dev, sizeof(*cqspi), GFP_KERNEL);
if (!cqspi)
return -ENOMEM;
mutex_init(&cqspi->bus_mutex);
cqspi->pdev = pdev;
platform_set_drvdata(pdev, cqspi);
/* Obtain configuration from OF. */
ret = cqspi_of_get_pdata(pdev);
if (ret) {
dev_err(dev, "Cannot get mandatory OF data.\n");
return -ENODEV;
}
/* Obtain QSPI clock. */
cqspi->clk = devm_clk_get(dev, NULL);
if (IS_ERR(cqspi->clk)) {
dev_err(dev, "Cannot claim QSPI clock.\n");
return PTR_ERR(cqspi->clk);
}
/* Obtain and remap controller address. */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
cqspi->iobase = devm_ioremap_resource(dev, res);
if (IS_ERR(cqspi->iobase)) {
dev_err(dev, "Cannot remap controller address.\n");
return PTR_ERR(cqspi->iobase);
}
/* Obtain and remap AHB address. */
res_ahb = platform_get_resource(pdev, IORESOURCE_MEM, 1);
cqspi->ahb_base = devm_ioremap_resource(dev, res_ahb);
if (IS_ERR(cqspi->ahb_base)) {
dev_err(dev, "Cannot remap AHB address.\n");
return PTR_ERR(cqspi->ahb_base);
}
init_completion(&cqspi->transfer_complete);
/* Obtain IRQ line. */
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(dev, "Cannot obtain IRQ.\n");
return -ENXIO;
}
ret = clk_prepare_enable(cqspi->clk);
if (ret) {
dev_err(dev, "Cannot enable QSPI clock.\n");
return ret;
}
cqspi->master_ref_clk_hz = clk_get_rate(cqspi->clk);
ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
pdev->name, cqspi);
if (ret) {
dev_err(dev, "Cannot request IRQ.\n");
goto probe_irq_failed;
}
cqspi_wait_idle(cqspi);
cqspi_controller_init(cqspi);
cqspi->current_cs = -1;
cqspi->sclk = 0;
ret = cqspi_setup_flash(cqspi, np);
if (ret) {
dev_err(dev, "Cadence QSPI NOR probe failed %d\n", ret);
goto probe_setup_failed;
}
return ret;
probe_irq_failed:
cqspi_controller_enable(cqspi, 0);
probe_setup_failed:
clk_disable_unprepare(cqspi->clk);
return ret;
}
static int cqspi_remove(struct platform_device *pdev)
{
struct cqspi_st *cqspi = platform_get_drvdata(pdev);
int i;
for (i = 0; i < CQSPI_MAX_CHIPSELECT; i++)
if (cqspi->f_pdata[i].registered)
mtd_device_unregister(&cqspi->f_pdata[i].nor.mtd);
cqspi_controller_enable(cqspi, 0);
clk_disable_unprepare(cqspi->clk);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int cqspi_suspend(struct device *dev)
{
struct cqspi_st *cqspi = dev_get_drvdata(dev);
cqspi_controller_enable(cqspi, 0);
return 0;
}
static int cqspi_resume(struct device *dev)
{
struct cqspi_st *cqspi = dev_get_drvdata(dev);
cqspi_controller_enable(cqspi, 1);
return 0;
}
static const struct dev_pm_ops cqspi_dev_pm_ops = {
.suspend = cqspi_suspend,
.resume = cqspi_resume,
};
#define CQSPI_DEV_PM_OPS (&cqspi_dev_pm_ops)
#else
#define CQSPI_DEV_PM_OPS NULL
#endif
static const struct of_device_id cqspi_dt_ids[] = {
{.compatible = "cdns,qspi-nor",},
{ /* end of table */ }
};
MODULE_DEVICE_TABLE(of, cqspi_dt_ids);
static struct platform_driver cqspi_platform_driver = {
.probe = cqspi_probe,
.remove = cqspi_remove,
.driver = {
.name = CQSPI_NAME,
.pm = CQSPI_DEV_PM_OPS,
.of_match_table = cqspi_dt_ids,
},
};
module_platform_driver(cqspi_platform_driver);
MODULE_DESCRIPTION("Cadence QSPI Controller Driver");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:" CQSPI_NAME);
MODULE_AUTHOR("Ley Foon Tan <lftan@altera.com>");
MODULE_AUTHOR("Graham Moore <grmoore@opensource.altera.com>");
@@ -618,9 +618,9 @@ static inline void fsl_qspi_invalid(struct fsl_qspi *q)
 qspi_writel(q, reg, q->iobase + QUADSPI_MCR);
 }
-static int fsl_qspi_nor_write(struct fsl_qspi *q, struct spi_nor *nor,
+static ssize_t fsl_qspi_nor_write(struct fsl_qspi *q, struct spi_nor *nor,
 u8 opcode, unsigned int to, u32 *txbuf,
-unsigned count, size_t *retlen)
+unsigned count)
 {
 int ret, i, j;
 u32 tmp;
@@ -647,8 +647,8 @@ static int fsl_qspi_nor_write(struct fsl_qspi *q, struct spi_nor *nor,
 /* Trigger it */
 ret = fsl_qspi_runcmd(q, opcode, to, count);
-if (ret == 0 && retlen)
-*retlen += count;
+if (ret == 0)
+return count;
 return ret;
 }
@@ -859,7 +859,9 @@ static int fsl_qspi_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
 } else if (len > 0) {
 ret = fsl_qspi_nor_write(q, nor, opcode, 0,
-(u32 *)buf, len, NULL);
+(u32 *)buf, len);
+if (ret > 0)
+return 0;
 } else {
 dev_err(q->dev, "invalid cmd %d\n", opcode);
 ret = -EINVAL;
@@ -868,20 +870,20 @@ static int fsl_qspi_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
 return ret;
 }
-static void fsl_qspi_write(struct spi_nor *nor, loff_t to,
-size_t len, size_t *retlen, const u_char *buf)
+static ssize_t fsl_qspi_write(struct spi_nor *nor, loff_t to,
+size_t len, const u_char *buf)
 {
 struct fsl_qspi *q = nor->priv;
-fsl_qspi_nor_write(q, nor, nor->program_opcode, to,
-(u32 *)buf, len, retlen);
+ssize_t ret = fsl_qspi_nor_write(q, nor, nor->program_opcode, to,
+(u32 *)buf, len);
 /* invalid the data in the AHB buffer. */
 fsl_qspi_invalid(q);
+return ret;
 }
-static int fsl_qspi_read(struct spi_nor *nor, loff_t from,
-size_t len, size_t *retlen, u_char *buf)
+static ssize_t fsl_qspi_read(struct spi_nor *nor, loff_t from,
+size_t len, u_char *buf)
 {
 struct fsl_qspi *q = nor->priv;
 u8 cmd = nor->read_opcode;
@@ -923,8 +925,7 @@ static int fsl_qspi_read(struct spi_nor *nor, loff_t from,
 memcpy(buf, q->ahb_addr + q->chip_base_addr + from - q->memmap_offs,
 len);
-*retlen += len;
-return 0;
+return len;
 }
 static int fsl_qspi_erase(struct spi_nor *nor, loff_t offs)
/*
* HiSilicon SPI Nor Flash Controller Driver
*
* Copyright (c) 2015-2016 HiSilicon Technologies Co., Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/bitops.h>
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/spi-nor.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
/* Hardware register offsets and field definitions */
#define FMC_CFG 0x00
#define FMC_CFG_OP_MODE_MASK BIT_MASK(0)
#define FMC_CFG_OP_MODE_BOOT 0
#define FMC_CFG_OP_MODE_NORMAL 1
#define FMC_CFG_FLASH_SEL(type) (((type) & 0x3) << 1)
#define FMC_CFG_FLASH_SEL_MASK 0x6
#define FMC_ECC_TYPE(type) (((type) & 0x7) << 5)
#define FMC_ECC_TYPE_MASK GENMASK(7, 5)
#define SPI_NOR_ADDR_MODE_MASK BIT_MASK(10)
#define SPI_NOR_ADDR_MODE_3BYTES (0x0 << 10)
#define SPI_NOR_ADDR_MODE_4BYTES (0x1 << 10)
#define FMC_GLOBAL_CFG 0x04
#define FMC_GLOBAL_CFG_WP_ENABLE BIT(6)
#define FMC_SPI_TIMING_CFG 0x08
#define TIMING_CFG_TCSH(nr) (((nr) & 0xf) << 8)
#define TIMING_CFG_TCSS(nr) (((nr) & 0xf) << 4)
#define TIMING_CFG_TSHSL(nr) ((nr) & 0xf)
#define CS_HOLD_TIME 0x6
#define CS_SETUP_TIME 0x6
#define CS_DESELECT_TIME 0xf
#define FMC_INT 0x18
#define FMC_INT_OP_DONE BIT(0)
#define FMC_INT_CLR 0x20
#define FMC_CMD 0x24
#define FMC_CMD_CMD1(cmd) ((cmd) & 0xff)
#define FMC_ADDRL 0x2c
#define FMC_OP_CFG 0x30
#define OP_CFG_FM_CS(cs) ((cs) << 11)
#define OP_CFG_MEM_IF_TYPE(type) (((type) & 0x7) << 7)
#define OP_CFG_ADDR_NUM(addr) (((addr) & 0x7) << 4)
#define OP_CFG_DUMMY_NUM(dummy) ((dummy) & 0xf)
#define FMC_DATA_NUM 0x38
#define FMC_DATA_NUM_CNT(cnt) ((cnt) & GENMASK(13, 0))
#define FMC_OP 0x3c
#define FMC_OP_DUMMY_EN BIT(8)
#define FMC_OP_CMD1_EN BIT(7)
#define FMC_OP_ADDR_EN BIT(6)
#define FMC_OP_WRITE_DATA_EN BIT(5)
#define FMC_OP_READ_DATA_EN BIT(2)
#define FMC_OP_READ_STATUS_EN BIT(1)
#define FMC_OP_REG_OP_START BIT(0)
#define FMC_DMA_LEN 0x40
#define FMC_DMA_LEN_SET(len) ((len) & GENMASK(27, 0))
#define FMC_DMA_SADDR_D0 0x4c
#define HIFMC_DMA_MAX_LEN (4096)
#define HIFMC_DMA_MASK (HIFMC_DMA_MAX_LEN - 1)
#define FMC_OP_DMA 0x68
#define OP_CTRL_RD_OPCODE(code) (((code) & 0xff) << 16)
#define OP_CTRL_WR_OPCODE(code) (((code) & 0xff) << 8)
#define OP_CTRL_RW_OP(op) ((op) << 1)
#define OP_CTRL_DMA_OP_READY BIT(0)
#define FMC_OP_READ 0x0
#define FMC_OP_WRITE 0x1
#define FMC_WAIT_TIMEOUT 1000000
enum hifmc_iftype {
IF_TYPE_STD,
IF_TYPE_DUAL,
IF_TYPE_DIO,
IF_TYPE_QUAD,
IF_TYPE_QIO,
};
struct hifmc_priv {
u32 chipselect;
u32 clkrate;
struct hifmc_host *host;
};
#define HIFMC_MAX_CHIP_NUM 2
struct hifmc_host {
struct device *dev;
struct mutex lock;
void __iomem *regbase;
void __iomem *iobase;
struct clk *clk;
void *buffer;
dma_addr_t dma_buffer;
struct spi_nor *nor[HIFMC_MAX_CHIP_NUM];
u32 num_chip;
};
static inline int wait_op_finish(struct hifmc_host *host)
{
u32 reg;
return readl_poll_timeout(host->regbase + FMC_INT, reg,
(reg & FMC_INT_OP_DONE), 0, FMC_WAIT_TIMEOUT);
}
static int get_if_type(enum read_mode flash_read)
{
enum hifmc_iftype if_type;
switch (flash_read) {
case SPI_NOR_DUAL:
if_type = IF_TYPE_DUAL;
break;
case SPI_NOR_QUAD:
if_type = IF_TYPE_QUAD;
break;
case SPI_NOR_NORMAL:
case SPI_NOR_FAST:
default:
if_type = IF_TYPE_STD;
break;
}
return if_type;
}
static void hisi_spi_nor_init(struct hifmc_host *host)
{
u32 reg;
reg = TIMING_CFG_TCSH(CS_HOLD_TIME)
| TIMING_CFG_TCSS(CS_SETUP_TIME)
| TIMING_CFG_TSHSL(CS_DESELECT_TIME);
writel(reg, host->regbase + FMC_SPI_TIMING_CFG);
}
static int hisi_spi_nor_prep(struct spi_nor *nor, enum spi_nor_ops ops)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
int ret;
mutex_lock(&host->lock);
ret = clk_set_rate(host->clk, priv->clkrate);
if (ret)
goto out;
ret = clk_prepare_enable(host->clk);
if (ret)
goto out;
return 0;
out:
mutex_unlock(&host->lock);
return ret;
}
static void hisi_spi_nor_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
clk_disable_unprepare(host->clk);
mutex_unlock(&host->lock);
}
static int hisi_spi_nor_op_reg(struct spi_nor *nor,
u8 opcode, int len, u8 optype)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
u32 reg;
reg = FMC_CMD_CMD1(opcode);
writel(reg, host->regbase + FMC_CMD);
reg = FMC_DATA_NUM_CNT(len);
writel(reg, host->regbase + FMC_DATA_NUM);
reg = OP_CFG_FM_CS(priv->chipselect);
writel(reg, host->regbase + FMC_OP_CFG);
writel(0xff, host->regbase + FMC_INT_CLR);
reg = FMC_OP_CMD1_EN | FMC_OP_REG_OP_START | optype;
writel(reg, host->regbase + FMC_OP);
return wait_op_finish(host);
}
static int hisi_spi_nor_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf,
int len)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
int ret;
ret = hisi_spi_nor_op_reg(nor, opcode, len, FMC_OP_READ_DATA_EN);
if (ret)
return ret;
memcpy_fromio(buf, host->iobase, len);
return 0;
}
static int hisi_spi_nor_write_reg(struct spi_nor *nor, u8 opcode,
u8 *buf, int len)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
if (len)
memcpy_toio(host->iobase, buf, len);
return hisi_spi_nor_op_reg(nor, opcode, len, FMC_OP_WRITE_DATA_EN);
}
static int hisi_spi_nor_dma_transfer(struct spi_nor *nor, loff_t start_off,
dma_addr_t dma_buf, size_t len, u8 op_type)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
u8 if_type = 0;
u32 reg;
reg = readl(host->regbase + FMC_CFG);
reg &= ~(FMC_CFG_OP_MODE_MASK | SPI_NOR_ADDR_MODE_MASK);
reg |= FMC_CFG_OP_MODE_NORMAL;
reg |= (nor->addr_width == 4) ? SPI_NOR_ADDR_MODE_4BYTES
: SPI_NOR_ADDR_MODE_3BYTES;
writel(reg, host->regbase + FMC_CFG);
writel(start_off, host->regbase + FMC_ADDRL);
writel(dma_buf, host->regbase + FMC_DMA_SADDR_D0);
writel(FMC_DMA_LEN_SET(len), host->regbase + FMC_DMA_LEN);
reg = OP_CFG_FM_CS(priv->chipselect);
if_type = get_if_type(nor->flash_read);
reg |= OP_CFG_MEM_IF_TYPE(if_type);
if (op_type == FMC_OP_READ)
reg |= OP_CFG_DUMMY_NUM(nor->read_dummy >> 3);
writel(reg, host->regbase + FMC_OP_CFG);
writel(0xff, host->regbase + FMC_INT_CLR);
reg = OP_CTRL_RW_OP(op_type) | OP_CTRL_DMA_OP_READY;
reg |= (op_type == FMC_OP_READ)
? OP_CTRL_RD_OPCODE(nor->read_opcode)
: OP_CTRL_WR_OPCODE(nor->program_opcode);
writel(reg, host->regbase + FMC_OP_DMA);
return wait_op_finish(host);
}
static ssize_t hisi_spi_nor_read(struct spi_nor *nor, loff_t from, size_t len,
u_char *read_buf)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
size_t offset;
int ret;
for (offset = 0; offset < len; offset += HIFMC_DMA_MAX_LEN) {
size_t trans = min_t(size_t, HIFMC_DMA_MAX_LEN, len - offset);
ret = hisi_spi_nor_dma_transfer(nor,
from + offset, host->dma_buffer, trans, FMC_OP_READ);
if (ret) {
dev_warn(nor->dev, "DMA read timeout\n");
return ret;
}
memcpy(read_buf + offset, host->buffer, trans);
}
return len;
}
static ssize_t hisi_spi_nor_write(struct spi_nor *nor, loff_t to,
size_t len, const u_char *write_buf)
{
struct hifmc_priv *priv = nor->priv;
struct hifmc_host *host = priv->host;
size_t offset;
int ret;
for (offset = 0; offset < len; offset += HIFMC_DMA_MAX_LEN) {
size_t trans = min_t(size_t, HIFMC_DMA_MAX_LEN, len - offset);
memcpy(host->buffer, write_buf + offset, trans);
ret = hisi_spi_nor_dma_transfer(nor,
to + offset, host->dma_buffer, trans, FMC_OP_WRITE);
if (ret) {
dev_warn(nor->dev, "DMA write timeout\n");
return ret;
}
}
return len;
}
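Both paths above bounce every transfer through one 4 KiB coherent DMA buffer, so a large request is sliced into HIFMC_DMA_MAX_LEN chunks; a quick sketch of the slicing for a 10000-byte transfer:

#include <stdio.h>

#define HIFMC_DMA_MAX_LEN 4096

int main(void)
{
	size_t len = 10000, offset;

	/* Same slicing loop as the read/write paths: 4096 + 4096 + 1808 */
	for (offset = 0; offset < len; offset += HIFMC_DMA_MAX_LEN) {
		size_t trans = len - offset < HIFMC_DMA_MAX_LEN
				? len - offset : HIFMC_DMA_MAX_LEN;
		printf("chunk at %zu: %zu bytes\n", offset, trans);
	}
	return 0;
}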
/* Get SPI flash device information and register it as an MTD device. */
static int hisi_spi_nor_register(struct device_node *np,
struct hifmc_host *host)
{
struct device *dev = host->dev;
struct spi_nor *nor;
struct hifmc_priv *priv;
struct mtd_info *mtd;
int ret;
nor = devm_kzalloc(dev, sizeof(*nor), GFP_KERNEL);
if (!nor)
return -ENOMEM;
nor->dev = dev;
spi_nor_set_flash_node(nor, np);
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
ret = of_property_read_u32(np, "reg", &priv->chipselect);
if (ret) {
dev_err(dev, "There's no reg property for %s\n",
np->full_name);
return ret;
}
ret = of_property_read_u32(np, "spi-max-frequency",
&priv->clkrate);
if (ret) {
dev_err(dev, "There's no spi-max-frequency property for %s\n",
np->full_name);
return ret;
}
priv->host = host;
nor->priv = priv;
nor->prepare = hisi_spi_nor_prep;
nor->unprepare = hisi_spi_nor_unprep;
nor->read_reg = hisi_spi_nor_read_reg;
nor->write_reg = hisi_spi_nor_write_reg;
nor->read = hisi_spi_nor_read;
nor->write = hisi_spi_nor_write;
nor->erase = NULL;
ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
if (ret)
return ret;
mtd = &nor->mtd;
mtd->name = np->name;
ret = mtd_device_register(mtd, NULL, 0);
if (ret)
return ret;
host->nor[host->num_chip] = nor;
host->num_chip++;
return 0;
}
static void hisi_spi_nor_unregister_all(struct hifmc_host *host)
{
int i;
for (i = 0; i < host->num_chip; i++)
mtd_device_unregister(&host->nor[i]->mtd);
}
static int hisi_spi_nor_register_all(struct hifmc_host *host)
{
struct device *dev = host->dev;
struct device_node *np;
int ret;
for_each_available_child_of_node(dev->of_node, np) {
ret = hisi_spi_nor_register(np, host);
if (ret)
goto fail;
if (host->num_chip == HIFMC_MAX_CHIP_NUM) {
dev_warn(dev, "Flash device number exceeds the maximum chipselect number\n");
break;
}
}
return 0;
fail:
hisi_spi_nor_unregister_all(host);
return ret;
}
static int hisi_spi_nor_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct resource *res;
struct hifmc_host *host;
int ret;
host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
if (!host)
return -ENOMEM;
platform_set_drvdata(pdev, host);
host->dev = dev;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "control");
host->regbase = devm_ioremap_resource(dev, res);
if (IS_ERR(host->regbase))
return PTR_ERR(host->regbase);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "memory");
host->iobase = devm_ioremap_resource(dev, res);
if (IS_ERR(host->iobase))
return PTR_ERR(host->iobase);
host->clk = devm_clk_get(dev, NULL);
if (IS_ERR(host->clk))
return PTR_ERR(host->clk);
ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
if (ret) {
dev_warn(dev, "Unable to set dma mask\n");
return ret;
}
host->buffer = dmam_alloc_coherent(dev, HIFMC_DMA_MAX_LEN,
&host->dma_buffer, GFP_KERNEL);
if (!host->buffer)
return -ENOMEM;
mutex_init(&host->lock);
ret = clk_prepare_enable(host->clk);
if (ret) {
mutex_destroy(&host->lock);
return ret;
}
hisi_spi_nor_init(host);
ret = hisi_spi_nor_register_all(host);
if (ret)
mutex_destroy(&host->lock);
clk_disable_unprepare(host->clk);
return ret;
}
static int hisi_spi_nor_remove(struct platform_device *pdev)
{
struct hifmc_host *host = platform_get_drvdata(pdev);
hisi_spi_nor_unregister_all(host);
mutex_destroy(&host->lock);
clk_disable_unprepare(host->clk);
return 0;
}
static const struct of_device_id hisi_spi_nor_dt_ids[] = {
{ .compatible = "hisilicon,fmc-spi-nor"},
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, hisi_spi_nor_dt_ids);
static struct platform_driver hisi_spi_nor_driver = {
.driver = {
.name = "hisi-sfc",
.of_match_table = hisi_spi_nor_dt_ids,
},
.probe = hisi_spi_nor_probe,
.remove = hisi_spi_nor_remove,
};
module_platform_driver(hisi_spi_nor_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("HiSilicon SPI Nor Flash Controller Driver");
@@ -21,7 +21,6 @@
 #include <linux/ioport.h>
 #include <linux/math64.h>
 #include <linux/module.h>
-#include <linux/mtd/mtd.h>
 #include <linux/mutex.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
@@ -243,8 +242,8 @@ static void mt8173_nor_set_addr(struct mt8173_nor *mt8173_nor, u32 addr)
 writeb(addr & 0xff, mt8173_nor->base + MTK_NOR_RADR3_REG);
 }
-static int mt8173_nor_read(struct spi_nor *nor, loff_t from, size_t length,
-size_t *retlen, u_char *buffer)
+static ssize_t mt8173_nor_read(struct spi_nor *nor, loff_t from, size_t length,
+u_char *buffer)
 {
 int i, ret;
 int addr = (int)from;
@@ -255,13 +254,13 @@ static int mt8173_nor_read(struct spi_nor *nor, loff_t from, size_t length,
 mt8173_nor_set_read_mode(mt8173_nor);
 mt8173_nor_set_addr(mt8173_nor, addr);
-for (i = 0; i < length; i++, (*retlen)++) {
+for (i = 0; i < length; i++) {
 ret = mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_PIO_READ_CMD);
 if (ret < 0)
 return ret;
 buf[i] = readb(mt8173_nor->base + MTK_NOR_RDATA_REG);
 }
-return 0;
+return length;
 }
 static int mt8173_nor_write_single_byte(struct mt8173_nor *mt8173_nor,
@@ -297,36 +296,44 @@ static int mt8173_nor_write_buffer(struct mt8173_nor *mt8173_nor, int addr,
 return mt8173_nor_execute_cmd(mt8173_nor, MTK_NOR_WR_CMD);
 }
-static void mt8173_nor_write(struct spi_nor *nor, loff_t to, size_t len,
-size_t *retlen, const u_char *buf)
+static ssize_t mt8173_nor_write(struct spi_nor *nor, loff_t to, size_t len,
+const u_char *buf)
 {
 int ret;
 struct mt8173_nor *mt8173_nor = nor->priv;
+size_t i;
 ret = mt8173_nor_write_buffer_enable(mt8173_nor);
-if (ret < 0)
+if (ret < 0) {
 dev_warn(mt8173_nor->dev, "write buffer enable failed!\n");
+return ret;
+}
-while (len >= SFLASH_WRBUF_SIZE) {
+for (i = 0; i + SFLASH_WRBUF_SIZE <= len; i += SFLASH_WRBUF_SIZE) {
 ret = mt8173_nor_write_buffer(mt8173_nor, to, buf);
-if (ret < 0)
+if (ret < 0) {
 dev_err(mt8173_nor->dev, "write buffer failed!\n");
-len -= SFLASH_WRBUF_SIZE;
+return ret;
+}
 to += SFLASH_WRBUF_SIZE;
 buf += SFLASH_WRBUF_SIZE;
-(*retlen) += SFLASH_WRBUF_SIZE;
 }
 ret = mt8173_nor_write_buffer_disable(mt8173_nor);
-if (ret < 0)
+if (ret < 0) {
 dev_warn(mt8173_nor->dev, "write buffer disable failed!\n");
+return ret;
+}
-if (len) {
-ret = mt8173_nor_write_single_byte(mt8173_nor, to, (int)len,
-(u8 *)buf);
-if (ret < 0)
+if (i < len) {
+ret = mt8173_nor_write_single_byte(mt8173_nor, to,
+(int)(len - i), (u8 *)buf);
+if (ret < 0) {
 dev_err(mt8173_nor->dev, "write single byte failed!\n");
-(*retlen) += len;
+return ret;
+}
 }
+return len;
 }
 static int mt8173_nor_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
......@@ -172,8 +172,8 @@ static int nxp_spifi_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
return nxp_spifi_wait_for_cmd(spifi);
}
static int nxp_spifi_read(struct spi_nor *nor, loff_t from, size_t len,
size_t *retlen, u_char *buf)
static ssize_t nxp_spifi_read(struct spi_nor *nor, loff_t from, size_t len,
u_char *buf)
{
struct nxp_spifi *spifi = nor->priv;
int ret;
......@@ -183,24 +183,23 @@ static int nxp_spifi_read(struct spi_nor *nor, loff_t from, size_t len,
return ret;
memcpy_fromio(buf, spifi->flash_base + from, len);
*retlen += len;
return 0;
return len;
}
static void nxp_spifi_write(struct spi_nor *nor, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
static ssize_t nxp_spifi_write(struct spi_nor *nor, loff_t to, size_t len,
const u_char *buf)
{
struct nxp_spifi *spifi = nor->priv;
u32 cmd;
int ret;
size_t i;
ret = nxp_spifi_set_memory_mode_off(spifi);
if (ret)
return;
return ret;
writel(to, spifi->io_base + SPIFI_ADDR);
*retlen += len;
cmd = SPIFI_CMD_DOUT |
SPIFI_CMD_DATALEN(len) |
......@@ -209,10 +208,14 @@ static void nxp_spifi_write(struct spi_nor *nor, loff_t to, size_t len,
SPIFI_CMD_FRAMEFORM(spifi->nor.addr_width + 1);
writel(cmd, spifi->io_base + SPIFI_CMD);
while (len--)
writeb(*buf++, spifi->io_base + SPIFI_DATA);
for (i = 0; i < len; i++)
writeb(buf[i], spifi->io_base + SPIFI_DATA);
ret = nxp_spifi_wait_for_cmd(spifi);
if (ret)
return ret;
nxp_spifi_wait_for_cmd(spifi);
return len;
}
static int nxp_spifi_erase(struct spi_nor *nor, loff_t offs)
......
......@@ -661,7 +661,7 @@ static int stm_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
status_new = (status_old & ~mask & ~SR_TB) | val;
/* Don't protect status register if we're fully unlocked */
if (lock_len == mtd->size)
if (lock_len == 0)
status_new &= ~SR_SRWD;
if (!use_top)
......@@ -830,10 +830,26 @@ static const struct flash_info spi_nor_ids[] = {
{ "mb85rs1mt", INFO(0x047f27, 0, 128 * 1024, 1, SPI_NOR_NO_ERASE) },
/* GigaDevice */
{ "gd25q32", INFO(0xc84016, 0, 64 * 1024, 64, SECT_4K) },
{ "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) },
{ "gd25lq64c", INFO(0xc86017, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "gd25q128", INFO(0xc84018, 0, 64 * 1024, 256, SECT_4K) },
{
"gd25q32", INFO(0xc84016, 0, 64 * 1024, 64,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
},
{
"gd25q64", INFO(0xc84017, 0, 64 * 1024, 128,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
},
{
"gd25lq64c", INFO(0xc86017, 0, 64 * 1024, 128,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
},
{
"gd25q128", INFO(0xc84018, 0, 64 * 1024, 256,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
},
/* Intel/Numonyx -- xxxs33b */
{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) },
......@@ -871,6 +887,7 @@ static const struct flash_info spi_nor_ids[] = {
{ "n25q512a", INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
{ "n25q512ax3", INFO(0x20ba20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
{ "n25q00", INFO(0x20ba21, 0, 64 * 1024, 2048, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
{ "n25q00a", INFO(0x20bb21, 0, 64 * 1024, 2048, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
/* PMC */
{ "pm25lv512", INFO(0, 0, 32 * 1024, 2, SECT_4K_PMC) },
......@@ -1031,8 +1048,25 @@ static int spi_nor_read(struct mtd_info *mtd, loff_t from, size_t len,
if (ret)
return ret;
ret = nor->read(nor, from, len, retlen, buf);
while (len) {
ret = nor->read(nor, from, len, buf);
if (ret == 0) {
/* We shouldn't see 0-length reads */
ret = -EIO;
goto read_err;
}
if (ret < 0)
goto read_err;
WARN_ON(ret > len);
*retlen += ret;
buf += ret;
from += ret;
len -= ret;
}
ret = 0;
read_err:
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_READ);
return ret;
}
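The reworked spi_nor_read() above loops until the whole request is satisfied, so a controller's ->read() hook may legitimately return a short count, for instance when the hardware caps a single transfer. A minimal sketch of such a hook under the new ssize_t contract (hypothetical driver; the names my_nor_read and MY_FIFO_SIZE are illustrative and not part of this series):

#include <linux/kernel.h>
#include <linux/mtd/spi-nor.h>

#define MY_FIFO_SIZE	256	/* hypothetical per-transfer hardware limit */

static ssize_t my_nor_read(struct spi_nor *nor, loff_t from,
			   size_t len, u_char *buf)
{
	size_t chunk = min_t(size_t, len, MY_FIFO_SIZE);

	/*
	 * Program 'from' into the (hypothetical) controller, issue the
	 * read command, and drain up to 'chunk' bytes into 'buf'.
	 */

	return chunk;	/* short read: spi_nor_read() advances and retries */
}

Returning the transferred byte count instead of filling a *retlen argument is what lets the core detect and resume short transfers.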
......@@ -1060,10 +1094,14 @@ static int sst_write(struct mtd_info *mtd, loff_t to, size_t len,
nor->program_opcode = SPINOR_OP_BP;
/* write one byte. */
nor->write(nor, to, 1, retlen, buf);
ret = nor->write(nor, to, 1, buf);
if (ret < 0)
goto sst_write_err;
WARN(ret != 1, "While writing 1 byte written %i bytes\n",
(int)ret);
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto time_out;
goto sst_write_err;
}
to += actual;
......@@ -1072,10 +1110,14 @@ static int sst_write(struct mtd_info *mtd, loff_t to, size_t len,
nor->program_opcode = SPINOR_OP_AAI_WP;
/* write two bytes. */
nor->write(nor, to, 2, retlen, buf + actual);
ret = nor->write(nor, to, 2, buf + actual);
if (ret < 0)
goto sst_write_err;
WARN(ret != 2, "While writing 2 bytes written %i bytes\n",
(int)ret);
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto time_out;
goto sst_write_err;
to += 2;
nor->sst_write_second = true;
}
......@@ -1084,21 +1126,26 @@ static int sst_write(struct mtd_info *mtd, loff_t to, size_t len,
write_disable(nor);
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto time_out;
goto sst_write_err;
/* Write out trailing byte if it exists. */
if (actual != len) {
write_enable(nor);
nor->program_opcode = SPINOR_OP_BP;
nor->write(nor, to, 1, retlen, buf + actual);
ret = nor->write(nor, to, 1, buf + actual);
if (ret < 0)
goto sst_write_err;
WARN(ret != 1, "While writing 1 byte written %i bytes\n",
(int)ret);
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto time_out;
goto sst_write_err;
write_disable(nor);
actual += 1;
}
time_out:
sst_write_err:
*retlen += actual;
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_WRITE);
return ret;
}
......@@ -1112,8 +1159,8 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
u32 page_offset, page_size, i;
int ret;
size_t page_offset, page_remain, i;
ssize_t ret;
dev_dbg(nor->dev, "to 0x%08x, len %zd\n", (u32)to, len);
......@@ -1121,35 +1168,37 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
if (ret)
return ret;
write_enable(nor);
page_offset = to & (nor->page_size - 1);
for (i = 0; i < len; ) {
ssize_t written;
/* do all the bytes fit onto one page? */
if (page_offset + len <= nor->page_size) {
nor->write(nor, to, len, retlen, buf);
} else {
page_offset = (to + i) & (nor->page_size - 1);
WARN_ONCE(page_offset,
"Writing at offset %zu into a NOR page. Writing partial pages may decrease reliability and increase wear of NOR flash.",
page_offset);
/* the size of data remaining on the first page */
page_size = nor->page_size - page_offset;
nor->write(nor, to, page_size, retlen, buf);
/* write everything in nor->page_size chunks */
for (i = page_size; i < len; i += page_size) {
page_size = len - i;
if (page_size > nor->page_size)
page_size = nor->page_size;
page_remain = min_t(size_t,
nor->page_size - page_offset, len - i);
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto write_err;
write_enable(nor);
write_enable(nor);
ret = nor->write(nor, to + i, page_remain, buf + i);
if (ret < 0)
goto write_err;
written = ret;
nor->write(nor, to + i, page_size, retlen, buf + i);
ret = spi_nor_wait_till_ready(nor);
if (ret)
goto write_err;
*retlen += written;
i += written;
if (written != page_remain) {
dev_err(nor->dev,
"While writing %zu bytes written %zd bytes\n",
page_remain, written);
ret = -EIO;
goto write_err;
}
}
ret = spi_nor_wait_till_ready(nor);
write_err:
spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_WRITE);
return ret;
......
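The rewritten spi_nor_write() walks the request in page-aligned chunks: page_offset is recomputed on every pass, and page_remain never crosses a page boundary. A standalone illustration of the same arithmetic (plain C, a 256-byte page assumed; values are made up for the example):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE_BYTES	256	/* typical SPI NOR page size (assumed) */

int main(void)
{
	size_t to = 0xF0, len = 300, i;

	for (i = 0; i < len; ) {
		size_t page_offset = (to + i) & (PAGE_SIZE_BYTES - 1);
		size_t page_remain = PAGE_SIZE_BYTES - page_offset;

		if (page_remain > len - i)
			page_remain = len - i;	/* min(), as in the driver */

		printf("program %zu bytes at 0x%zx\n", page_remain, to + i);
		i += page_remain;	/* the core adds nor->write()'s return */
	}
	return 0;
}

For these inputs the chunks come out as 16, 256, and 28 bytes: only the first write is misaligned, and every later one starts on a page boundary.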
......@@ -380,8 +380,7 @@ static int ssfdcr_readsect(struct mtd_blktrans_dev *dev,
" block_addr=%d\n", logic_sect_no, sectors_per_block, offset,
block_address);
if (block_address >= ssfdc->map_len)
BUG();
BUG_ON(block_address >= ssfdc->map_len);
block_address = ssfdc->logic_block_map[block_address];
......
......@@ -290,7 +290,7 @@ static int overwrite_test(void)
while (opno < max_overwrite) {
err = rewrite_page(0);
err = write_page(0);
if (err)
break;
......
......@@ -783,6 +783,7 @@ static inline void nand_set_controller_data(struct nand_chip *chip, void *priv)
* NAND Flash Manufacturer ID Codes
*/
#define NAND_MFR_TOSHIBA 0x98
#define NAND_MFR_ESMT 0xc8
#define NAND_MFR_SAMSUNG 0xec
#define NAND_MFR_FUJITSU 0x04
#define NAND_MFR_NATIONAL 0x8f
......
......@@ -173,10 +173,10 @@ struct spi_nor {
int (*read_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len);
int (*write_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len);
int (*read)(struct spi_nor *nor, loff_t from,
size_t len, size_t *retlen, u_char *read_buf);
void (*write)(struct spi_nor *nor, loff_t to,
size_t len, size_t *retlen, const u_char *write_buf);
ssize_t (*read)(struct spi_nor *nor, loff_t from,
size_t len, u_char *read_buf);
ssize_t (*write)(struct spi_nor *nor, loff_t to,
size_t len, const u_char *write_buf);
int (*erase)(struct spi_nor *nor, loff_t offs);
int (*flash_lock)(struct spi_nor *nor, loff_t ofs, uint64_t len);
......
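With the struct spi_nor hooks above now returning ssize_t, retlen bookkeeping lives entirely in the core. A rough sketch of a driver wired up for the new interface, reusing my_nor_read and MY_FIFO_SIZE from the earlier sketch (all names hypothetical, error paths trimmed):

#include <linux/platform_device.h>
#include <linux/mtd/spi-nor.h>

static ssize_t my_nor_write(struct spi_nor *nor, loff_t to,
			    size_t len, const u_char *buf)
{
	size_t chunk = min_t(size_t, len, MY_FIFO_SIZE);

	/* Load 'chunk' bytes from 'buf' and program them at 'to'. */

	return chunk;	/* bytes written, or a negative errno on failure */
}

static int my_nor_probe(struct platform_device *pdev)
{
	struct spi_nor *nor = devm_kzalloc(&pdev->dev, sizeof(*nor),
					   GFP_KERNEL);

	if (!nor)
		return -ENOMEM;

	nor->dev = &pdev->dev;
	nor->read = my_nor_read;	/* ssize_t, no *retlen argument */
	nor->write = my_nor_write;

	/* ... spi_nor_scan() and mtd registration elided ... */
	return 0;
}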