Commit d8ea757b authored by Linus Torvalds

Merge tag 'xtensa-20161005' of git://github.com/jcmvbkbc/linux-xtensa

Pull Xtensa updates from Max Filippov:
 "Updates for the xtensa architecture.  It is a combined set of patches
  for 4.8 that never got to the mainline and new patches for 4.9.

   - add new kernel memory layouts for MMUv3 cores: with 256MB and 512MB
     KSEG size, starting at physical address other than 0

   - make kernel load address configurable

   - clean up kernel memory layout macros

   - drop sysmem early allocator and switch to memblock

   - enable kmemleak and memory reservation from the device tree

   - wire up new syscalls: userfaultfd, membarrier, mlock2,
     copy_file_range, preadv2 and pwritev2

   - add new platform: Cadence Configurable System Platform (CSP) and
     new core variant for it: xt_lnx

   - rearrange CCOUNT calibration code, make most of it generic

   - improve machine reset code (XTFPGA now reboots reliably with MMUv3
     cores)

   - provide default memmap command line option for configurations
     without device tree support

   - ISS fixes: simdisk is now capable of using highmem pages, panic
     correctly terminates simulator"

* tag 'xtensa-20161005' of git://github.com/jcmvbkbc/linux-xtensa: (24 commits)
  xtensa: disable MMU initialization option on MMUv2 cores
  xtensa: add default memmap and mmio32native options to defconfigs
  xtensa: add default memmap option to common_defconfig
  xtensa: add default memmap option to iss_defconfig
  xtensa: ISS: allow simdisk to use high memory buffers
  xtensa: ISS: define simc_exit and use it instead of inline asm
  xtensa: xtfpga: group platform_* functions together
  xtensa: rearrange CCOUNT calibration
  xtensa: xtfpga: use clock provider, don't update DT
  xtensa: Tweak xuartps UART driver Rx watermark for Cadence CSP config.
  xtensa: initialize MMU before jumping to reset vector
  xtensa: fix icountlevel setting in cpu_reset
  xtensa: extract common CPU reset code into separate function
  xtensa: Added Cadence CSP kernel configuration for Xtensa
  xtensa: fix default kernel load address
  xtensa: wire up new syscalls
  xtensa: support reserved-memory DT node
  xtensa: drop sysmem and switch to memblock
  xtensa: minimize use of PLATFORM_DEFAULT_MEM_{ADDR,SIZE}
  xtensa: cleanup MMU setup and kernel layout macros
  ...
parents 41844e36 a4c6be5a
@@ -3,15 +3,8 @@ MMUv3 initialization sequence.
 The code in the initialize_mmu macro sets up MMUv3 memory mapping
 identically to MMUv2 fixed memory mapping. Depending on
 CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX symbol this code is
-located in one of the following address ranges:
-
-    0xF0000000..0xFFFFFFFF (will keep same address in MMU v2 layout;
-                            typically ROM)
-    0x00000000..0x07FFFFFF (system RAM; this code is actually linked
-                            at 0xD0000000..0xD7FFFFFF [cached]
-                            or 0xD8000000..0xDFFFFFFF [uncached];
-                            in any case, initially runs elsewhere
-                            than linked, so have to be careful)
+located in addresses it was linked for (symbol undefined), or not
+(symbol defined), so it needs to be position-independent.

 The code has the following assumptions:
   This code fragment is run only on an MMU v3.
@@ -28,24 +21,26 @@ TLB setup proceeds along the following steps.
   PA = physical address (two upper nibbles of it);
   pc = physical range that contains this code;

-After step 2, we jump to virtual address in 0x40000000..0x5fffffff
-that corresponds to next instruction to execute in this code.
-After step 4, we jump to intended (linked) address of this code.
+After step 2, we jump to virtual address in the range 0x40000000..0x5fffffff
+or 0x00000000..0x1fffffff, depending on whether the kernel was loaded below
+0x40000000 or above. That address corresponds to next instruction to execute
+in this code. After step 4, we jump to intended (linked) address of this code.
+The scheme below assumes that the kernel is loaded below 0x40000000.

-      Step 0     Step1     Step 2     Step3     Step 4     Step5
-
- ============  =====  ============  =====  ============  =====
-   VA      PA     PA    VA      PA     PA    VA      PA     PA
- ------    --     --  ------    --     --  ------    --     --
- E0..FF -> E0  -> E0  E0..FF -> E0        F0..FF -> F0  -> F0
- C0..DF -> C0  -> C0  C0..DF -> C0        E0..EF -> F0  -> F0
- A0..BF -> A0  -> A0  A0..BF -> A0        D8..DF -> 00  -> 00
- 80..9F -> 80  -> 80  80..9F -> 80        D0..D7 -> 00  -> 00
- 60..7F -> 60  -> 60  60..7F -> 60
- 40..5F -> 40         40..5F -> pc  -> pc 40..5F -> pc
- 20..3F -> 20  -> 20  20..3F -> 20
- 00..1F -> 00  -> 00  00..1F -> 00
+        Step0    Step1    Step2    Step3         Step4    Step5
+        =====    =====    =====    =====         =====    =====
+  VA      PA       PA       PA       PA     VA     PA       PA
+ ------   --       --       --       --   ------   --       --
+ E0..FF -> E0   -> E0    -> E0            F0..FF -> F0   -> F0
+ C0..DF -> C0   -> C0    -> C0            E0..EF -> F0   -> F0
+ A0..BF -> A0   -> A0    -> A0            D8..DF -> 00   -> 00
+ 80..9F -> 80   -> 80    -> 80            D0..D7 -> 00   -> 00
+ 60..7F -> 60   -> 60    -> 60
+ 40..5F -> 40            -> pc   -> pc    40..5F -> pc
+ 20..3F -> 20   -> 20    -> 20
+ 00..1F -> 00   -> 00    -> 00

-The default location of IO peripherals is above 0xf0000000. This may change
+The default location of IO peripherals is above 0xf0000000. This may be changed
 using a "ranges" property in a device tree simple-bus node. See ePAPR 1.1, §6.5
 for details on the syntax and semantic of simple-bus nodes. The following
 limitations apply:
@@ -62,3 +57,127 @@ limitations apply:
 6. The IO area covers the entire 256MB segment of parent-bus-address; the
    "ranges" triplet length field is ignored
MMUv3 address space layouts.
============================
Default MMUv2-compatible layout.
Symbol VADDR Size
+------------------+
| Userspace | 0x00000000 TASK_SIZE
+------------------+ 0x40000000
+------------------+
| Page table | 0x80000000
+------------------+ 0x80400000
+------------------+
| KMAP area | PKMAP_BASE PTRS_PER_PTE *
| | DCACHE_N_COLORS *
| | PAGE_SIZE
| | (4MB * DCACHE_N_COLORS)
+------------------+
| Atomic KMAP area | FIXADDR_START KM_TYPE_NR *
| | NR_CPUS *
| | DCACHE_N_COLORS *
| | PAGE_SIZE
+------------------+ FIXADDR_TOP 0xbffff000
+------------------+
| VMALLOC area | VMALLOC_START 0xc0000000 128MB - 64KB
+------------------+ VMALLOC_END
| Cache aliasing | TLBTEMP_BASE_1 0xc7ff0000 DCACHE_WAY_SIZE
| remap area 1 |
+------------------+
| Cache aliasing | TLBTEMP_BASE_2 DCACHE_WAY_SIZE
| remap area 2 |
+------------------+
+------------------+
| Cached KSEG | XCHAL_KSEG_CACHED_VADDR 0xd0000000 128MB
+------------------+
| Uncached KSEG | XCHAL_KSEG_BYPASS_VADDR 0xd8000000 128MB
+------------------+
| Cached KIO | XCHAL_KIO_CACHED_VADDR 0xe0000000 256MB
+------------------+
| Uncached KIO | XCHAL_KIO_BYPASS_VADDR 0xf0000000 256MB
+------------------+
256MB cached + 256MB uncached layout.
Symbol VADDR Size
+------------------+
| Userspace | 0x00000000 TASK_SIZE
+------------------+ 0x40000000
+------------------+
| Page table | 0x80000000
+------------------+ 0x80400000
+------------------+
| KMAP area | PKMAP_BASE PTRS_PER_PTE *
| | DCACHE_N_COLORS *
| | PAGE_SIZE
| | (4MB * DCACHE_N_COLORS)
+------------------+
| Atomic KMAP area | FIXADDR_START KM_TYPE_NR *
| | NR_CPUS *
| | DCACHE_N_COLORS *
| | PAGE_SIZE
+------------------+ FIXADDR_TOP 0x9ffff000
+------------------+
| VMALLOC area | VMALLOC_START 0xa0000000 128MB - 64KB
+------------------+ VMALLOC_END
| Cache aliasing | TLBTEMP_BASE_1 0xa7ff0000 DCACHE_WAY_SIZE
| remap area 1 |
+------------------+
| Cache aliasing | TLBTEMP_BASE_2 DCACHE_WAY_SIZE
| remap area 2 |
+------------------+
+------------------+
| Cached KSEG | XCHAL_KSEG_CACHED_VADDR 0xb0000000 256MB
+------------------+
| Uncached KSEG | XCHAL_KSEG_BYPASS_VADDR 0xc0000000 256MB
+------------------+
+------------------+
| Cached KIO | XCHAL_KIO_CACHED_VADDR 0xe0000000 256MB
+------------------+
| Uncached KIO | XCHAL_KIO_BYPASS_VADDR 0xf0000000 256MB
+------------------+
512MB cached + 512MB uncached layout.
Symbol VADDR Size
+------------------+
| Userspace | 0x00000000 TASK_SIZE
+------------------+ 0x40000000
+------------------+
| Page table | 0x80000000
+------------------+ 0x80400000
+------------------+
| KMAP area | PKMAP_BASE PTRS_PER_PTE *
| | DCACHE_N_COLORS *
| | PAGE_SIZE
| | (4MB * DCACHE_N_COLORS)
+------------------+
| Atomic KMAP area | FIXADDR_START KM_TYPE_NR *
| | NR_CPUS *
| | DCACHE_N_COLORS *
| | PAGE_SIZE
+------------------+ FIXADDR_TOP 0x8ffff000
+------------------+
| VMALLOC area | VMALLOC_START 0x90000000 128MB - 64KB
+------------------+ VMALLOC_END
| Cache aliasing | TLBTEMP_BASE_1 0x97ff0000 DCACHE_WAY_SIZE
| remap area 1 |
+------------------+
| Cache aliasing | TLBTEMP_BASE_2 DCACHE_WAY_SIZE
| remap area 2 |
+------------------+
+------------------+
| Cached KSEG | XCHAL_KSEG_CACHED_VADDR 0xa0000000 512MB
+------------------+
| Uncached KSEG | XCHAL_KSEG_BYPASS_VADDR 0xc0000000 512MB
+------------------+
| Cached KIO | XCHAL_KIO_CACHED_VADDR 0xe0000000 256MB
+------------------+
| Uncached KIO | XCHAL_KIO_BYPASS_VADDR 0xf0000000 256MB
+------------------+
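As a quick cross-check of the numbers in these tables, here is a minimal user-space sketch (not part of the patch; the macro names merely mirror the kernel's) that derives the VMALLOC and TLBTEMP addresses of the 256MB layout from its KSEG base:

#include <stdio.h>

/* Values taken from the "256MB cached + 256MB uncached" table above. */
#define XCHAL_KSEG_CACHED_VADDR	0xb0000000ul
#define VMALLOC_START		(XCHAL_KSEG_CACHED_VADDR - 0x10000000ul)
#define VMALLOC_END		(VMALLOC_START + 0x07fefffful)
#define TLBTEMP_BASE_1		(VMALLOC_END + 1)

int main(void)
{
	/* Expected output: a0000000 a7feffff a7ff0000, matching the table. */
	printf("%08lx %08lx %08lx\n", VMALLOC_START, VMALLOC_END, TLBTEMP_BASE_1);
	return 0;
}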
@@ -13,16 +13,19 @@ config XTENSA
 	select GENERIC_IRQ_SHOW
 	select GENERIC_PCI_IOMAP
 	select GENERIC_SCHED_CLOCK
+	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_API_DEBUG
 	select HAVE_EXIT_THREAD
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUTEX_CMPXCHG if !MMU
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IRQ_TIME_ACCOUNTING
+	select HAVE_MEMBLOCK
 	select HAVE_OPROFILE
 	select HAVE_PERF_EVENTS
 	select IRQ_DOMAIN
 	select MODULES_USE_ELF_RELA
+	select NO_BOOTMEM
 	select PERF_USE_VMALLOC
 	select VIRT_TO_BUS
 	help
@@ -209,7 +212,8 @@ config HOTPLUG_CPU
 config INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
 	bool "Initialize Xtensa MMU inside the Linux kernel code"
-	default y
+	depends on !XTENSA_VARIANT_FSF && !XTENSA_VARIANT_DC232B
+	default y if XTENSA_VARIANT_DC233C || XTENSA_VARIANT_CUSTOM
 	help
 	  Earlier version initialized the MMU in the exception vector
 	  before jumping to _startup in head.S and had an advantage that
@@ -236,6 +240,71 @@ config INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
 	  If in doubt, say Y.
config KSEG_PADDR
hex "Physical address of the KSEG mapping"
depends on INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX && MMU
default 0x00000000
help
This is the physical address where KSEG is mapped. Please refer to
the chosen KSEG layout help for the required address alignment.
Unpacked kernel image (including vectors) must be located completely
within KSEG.
Physical memory below this address is not available to linux.
If unsure, leave the default value here.
config KERNEL_LOAD_ADDRESS
hex "Kernel load address"
default 0x60003000 if !MMU
default 0x00003000 if MMU && INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
default 0xd0003000 if MMU && !INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
help
This is the address where the kernel is loaded.
It is virtual address for MMUv2 configurations and physical address
for all other configurations.
If unsure, leave the default value here.
config VECTORS_OFFSET
hex "Kernel vectors offset"
default 0x00003000
help
This is the offset of the kernel image from the relocatable vectors
base.
If unsure, leave the default value here.
choice
prompt "KSEG layout"
depends on MMU
default XTENSA_KSEG_MMU_V2
config XTENSA_KSEG_MMU_V2
bool "MMUv2: 128MB cached + 128MB uncached"
help
MMUv2 compatible kernel memory map: TLB way 5 maps 128MB starting
at KSEG_PADDR to 0xd0000000 with cache and to 0xd8000000
without cache.
KSEG_PADDR must be aligned to 128MB.
config XTENSA_KSEG_256M
bool "256MB cached + 256MB uncached"
depends on INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
help
TLB way 6 maps 256MB starting at KSEG_PADDR to 0xb0000000
with cache and to 0xc0000000 without cache.
KSEG_PADDR must be aligned to 256MB.
config XTENSA_KSEG_512M
bool "512MB cached + 512MB uncached"
depends on INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
help
TLB way 6 maps 512MB starting at KSEG_PADDR to 0xa0000000
with cache and to 0xc0000000 without cache.
KSEG_PADDR must be aligned to 256MB.
endchoice
 config HIGHMEM
 	bool "High Memory Support"
 	depends on MMU
@@ -331,7 +400,7 @@ config XTENSA_PLATFORM_XT2000
 config XTENSA_PLATFORM_XTFPGA
 	bool "XTFPGA"
 	select ETHOC if ETHERNET
-	select PLATFORM_WANT_DEFAULT_MEM
+	select PLATFORM_WANT_DEFAULT_MEM if !MMU
 	select SERIAL_CONSOLE
 	select XTENSA_CALIBRATE_CCOUNT
 	help
@@ -369,6 +438,7 @@ config USE_OF
 	bool "Flattened Device Tree support"
 	select OF
 	select OF_EARLY_FLATTREE
+	select OF_RESERVED_MEM
 	help
 	  Include support for flattened device tree machine descriptions.
@@ -439,16 +509,9 @@ config DEFAULT_MEM_START
 	default 0x00000000 if MMU
 	default 0x60000000 if !MMU
 	help
-	  This is a fallback start address of the default memory area, it is
-	  used when no physical memory size is passed through DTB or through
-	  boot parameter from bootloader.
-
-	  In noMMU configuration the following parameters are derived from it:
-	  - kernel load address;
-	  - kernel entry point address;
-	  - relocatable vectors base address;
-	  - uBoot load address;
-	  - TASK_SIZE.
+	  This is the base address of the default memory area.
+	  Default memory area has platform-specific meaning, it may be used
+	  for e.g. early cache initialization.

 	  If unsure, leave the default value here.
@@ -457,11 +520,9 @@ config DEFAULT_MEM_SIZE
 	depends on PLATFORM_WANT_DEFAULT_MEM
 	default 0x04000000
 	help
-	  This is a fallback size of the default memory area, it is used when
-	  no physical memory size is passed through DTB or through boot
-	  parameter from bootloader.
-
-	  It's also used for TASK_SIZE calculation in noMMU configuration.
+	  This is the size of the default memory area.
+	  Default memory area has platform-specific meaning, it may be used
+	  for e.g. early cache initialization.

 	  If unsure, leave the default value here.
......
@@ -23,7 +23,7 @@ SECTIONS
 	*(.ResetVector.text)
   }

-  .image KERNELOFFSET: AT (LOAD_MEMORY_ADDRESS)
+  .image KERNELOFFSET: AT (CONFIG_KERNEL_LOAD_ADDRESS)
   {
 	_image_start = .;
 	*(image)
......
@@ -35,7 +35,12 @@ _ResetVector:
 	.align 4
 RomInitAddr:
-	.word	LOAD_MEMORY_ADDRESS
+#if defined(CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX) && \
+	XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
+	.word	CONFIG_KERNEL_LOAD_ADDRESS
+#else
+	.word	KERNELOFFSET
+#endif
 RomBootParam:
 	.word	_bootparam
 _bootparam:
......
@@ -4,15 +4,7 @@
 # for more details.
 #

-ifdef CONFIG_MMU
-ifdef CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
-UIMAGE_LOADADDR = 0x00003000
-else
-UIMAGE_LOADADDR = 0xd0003000
-endif
-else
-UIMAGE_LOADADDR = $(shell printf "0x%x" $$(( ${CONFIG_DEFAULT_MEM_START} + 0x3000 )) )
-endif
+UIMAGE_LOADADDR = $(CONFIG_KERNEL_LOAD_ADDRESS)
 UIMAGE_COMPRESSION = gzip

 $(obj)/../uImage: vmlinux.bin.gz FORCE
......
/dts-v1/;
/ {
compatible = "cdns,xtensa-xtfpga";
#address-cells = <1>;
#size-cells = <1>;
interrupt-parent = <&pic>;
chosen {
bootargs = "earlycon=cdns,0xfd000000,115200 console=tty0 console=ttyPS0,115200 root=/dev/ram0 rw earlyprintk xilinx_uartps.rx_trigger_level=32 loglevel=8 nohz=off ignore_loglevel";
};
memory@0 {
device_type = "memory";
reg = <0x00000000 0x40000000>;
};
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
compatible = "cdns,xtensa-cpu";
reg = <0>;
};
};
pic: pic {
compatible = "cdns,xtensa-pic";
#interrupt-cells = <2>;
interrupt-controller;
};
clocks {
osc: main-oscillator {
#clock-cells = <0>;
compatible = "fixed-clock";
};
};
soc {
#address-cells = <1>;
#size-cells = <1>;
compatible = "simple-bus";
ranges = <0x00000000 0xf0000000 0x10000000>;
uart0: serial@0d000000 {
compatible = "xlnx,xuartps", "cdns,uart-r1p8";
clocks = <&osc>, <&osc>;
clock-names = "uart_clk", "pclk";
reg = <0x0d000000 0x1000>;
interrupts = <0 1>;
};
};
};
@@ -19,9 +19,7 @@ cpus {
 	cpu@0 {
 		compatible = "cdns,xtensa-cpu";
 		reg = <0>;
-		/* Filled in by platform_setup from FPGA register
-		 * clock-frequency = <100000000>;
-		 */
+		clocks = <&osc>;
 	};
 };
@@ -36,11 +34,6 @@ pic: pic {
 };

 clocks {
-	osc: main-oscillator {
-		#clock-cells = <0>;
-		compatible = "fixed-clock";
-	};
-
 	clk54: clk54 {
 		#clock-cells = <0>;
 		compatible = "fixed-clock";
@@ -54,6 +47,12 @@ soc {
 	compatible = "simple-bus";
 	ranges = <0x00000000 0xf0000000 0x10000000>;

+	osc: main-oscillator {
+		#clock-cells = <0>;
+		compatible = "cdns,xtfpga-clock";
+		reg = <0x0d020004 0x4>;
+	};
+
 	serial0: serial@0d050020 {
 		device_type = "serial";
 		compatible = "ns16550a";
......
@@ -33,7 +33,7 @@ CONFIG_HIGHMEM=y
 # CONFIG_PCI is not set
 CONFIG_XTENSA_PLATFORM_XTFPGA=y
 CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
+CONFIG_CMDLINE="earlycon=uart8250,mmio32native,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=0x38000000@0"
 CONFIG_USE_OF=y
 CONFIG_BUILTIN_DTB="kc705"
 # CONFIG_COMPACTION is not set
# CONFIG_COMPACTION is not set # CONFIG_COMPACTION is not set
......
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_USELIB=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_DEBUG=y
CONFIG_NAMESPACES=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="$$KERNEL_INITRAMFS_SOURCE"
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
# CONFIG_RD_XZ is not set
# CONFIG_RD_LZO is not set
# CONFIG_RD_LZ4 is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL_SYSCALL=y
CONFIG_EMBEDDED=y
CONFIG_PROFILING=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_XTENSA_VARIANT_CUSTOM=y
CONFIG_XTENSA_VARIANT_CUSTOM_NAME="csp"
CONFIG_XTENSA_UNALIGNED_USER=y
CONFIG_PREEMPT=y
CONFIG_HIGHMEM=y
# CONFIG_PCI is not set
CONFIG_XTENSA_PLATFORM_XTFPGA=y
CONFIG_USE_OF=y
CONFIG_BUILTIN_DTB="csp"
# CONFIG_COMPACTION is not set
CONFIG_XTFPGA_LCD=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
CONFIG_MTD_CFI=y
CONFIG_MTD_JEDECPROBE=y
CONFIG_MTD_CFI_INTELEXT=y
CONFIG_MTD_CFI_AMDSTD=y
CONFIG_MTD_CFI_STAA=y
CONFIG_MTD_PHYSMAP_OF=y
CONFIG_MTD_UBI=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
# CONFIG_INPUT_MOUSEDEV is not set
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
CONFIG_LEGACY_PTY_COUNT=16
CONFIG_SERIAL_XILINX_PS_UART=y
CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
CONFIG_HW_RANDOM=y
# CONFIG_HWMON is not set
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_NOWAYOUT=y
CONFIG_SOFT_WATCHDOG=y
# CONFIG_VGA_CONSOLE is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT3_FS=y
CONFIG_FANOTIFY=y
CONFIG_VFAT_FS=y
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V4=y
CONFIG_NFS_SWAP=y
CONFIG_ROOT_NFS=y
CONFIG_SUNRPC_DEBUG=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_INFO=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOCKUP_DETECTOR=y
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_PROVE_LOCKING=y
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_RCU_TRACE=y
CONFIG_FUNCTION_TRACER=y
# CONFIG_S32C1I_SELFTEST is not set
# CONFIG_CRYPTO_ECHAINIV is not set
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_HW is not set
@@ -32,7 +32,7 @@ CONFIG_HIGHMEM=y
 # CONFIG_PCI is not set
 CONFIG_XTENSA_PLATFORM_XTFPGA=y
 CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
+CONFIG_CMDLINE="earlycon=uart8250,mmio32native,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=0x38000000@0"
 CONFIG_USE_OF=y
 CONFIG_BUILTIN_DTB="kc705"
 # CONFIG_COMPACTION is not set
......
@@ -37,7 +37,7 @@ CONFIG_PREEMPT=y
 # CONFIG_PCI is not set
 CONFIG_XTENSA_PLATFORM_XTFPGA=y
 CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="earlycon=uart8250,mmio32,0x9d050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
+CONFIG_CMDLINE="earlycon=uart8250,mmio32native,0x9d050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=256M@0x60000000"
 CONFIG_USE_OF=y
 CONFIG_BUILTIN_DTB="kc705_nommu"
 CONFIG_DEFAULT_MEM_SIZE=0x10000000
......
@@ -36,7 +36,7 @@ CONFIG_HOTPLUG_CPU=y
 # CONFIG_PCI is not set
 CONFIG_XTENSA_PLATFORM_XTFPGA=y
 CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
+CONFIG_CMDLINE="earlycon=uart8250,mmio32native,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=96M@0"
 CONFIG_USE_OF=y
 CONFIG_BUILTIN_DTB="lx200mx"
 # CONFIG_COMPACTION is not set
......
@@ -48,7 +48,7 @@ static inline int ffz(unsigned long x)
  * __ffs: Find first bit set in word. Return 0 for bit 0
  */

-static inline int __ffs(unsigned long x)
+static inline unsigned long __ffs(unsigned long x)
 {
 	return 31 - __cntlz(x & -x);
 }
......
@@ -69,26 +69,23 @@
 	.endm

-#if XCHAL_DCACHE_LINE_LOCKABLE
-
 	.macro	___unlock_dcache_all ar at

-#if XCHAL_DCACHE_SIZE
+#if XCHAL_DCACHE_LINE_LOCKABLE && XCHAL_DCACHE_SIZE
 	__loop_cache_all \ar \at diu XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
 #endif

 	.endm

-#endif
-
-#if XCHAL_ICACHE_LINE_LOCKABLE
-
 	.macro	___unlock_icache_all ar at

+#if XCHAL_ICACHE_LINE_LOCKABLE && XCHAL_ICACHE_SIZE
 	__loop_cache_all \ar \at iiu XCHAL_ICACHE_SIZE XCHAL_ICACHE_LINEWIDTH
+#endif

 	.endm
-#endif

 	.macro	___flush_invalidate_dcache_all ar at
......
@@ -59,6 +59,11 @@ enum fixed_addresses {
  */
 static __always_inline unsigned long fix_to_virt(const unsigned int idx)
 {
+	/* Check if this memory layout is broken because fixmap overlaps page
+	 * table.
+	 */
+	BUILD_BUG_ON(FIXADDR_START <
+		     XCHAL_PAGE_TABLE_VADDR + XCHAL_PAGE_TABLE_SIZE);
 	BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
 	return __fix_to_virt(idx);
 }
......
@@ -68,6 +68,11 @@ void kunmap_high(struct page *page);

 static inline void *kmap(struct page *page)
 {
+	/* Check if this memory layout is broken because PKMAP overlaps
+	 * page table.
+	 */
+	BUILD_BUG_ON(PKMAP_BASE <
+		     XCHAL_PAGE_TABLE_VADDR + XCHAL_PAGE_TABLE_SIZE);
 	BUG_ON(in_interrupt());
 	if (!PageHighMem(page))
 		return page_address(page);
......
@@ -77,13 +77,16 @@
 	.align	4
 1:	movi	a2, 0x10000000
-	movi	a3, 0x18000000
-	add	a2, a2, a0
-9:	bgeu	a2, a3, 9b	/* PC is out of the expected range */
+
+#if CONFIG_KERNEL_LOAD_ADDRESS < 0x40000000ul
+#define TEMP_MAPPING_VADDR 0x40000000
+#else
+#define TEMP_MAPPING_VADDR 0x00000000
+#endif

 	/* Step 1: invalidate mapping at 0x40000000..0x5FFFFFFF. */

-	movi	a2, 0x40000000 | XCHAL_SPANNING_WAY
+	movi	a2, TEMP_MAPPING_VADDR | XCHAL_SPANNING_WAY
 	idtlb	a2
 	iitlb	a2
 	isync
@@ -95,14 +98,14 @@
 	srli	a3, a0, 27
 	slli	a3, a3, 27
 	addi	a3, a3, CA_BYPASS
-	addi	a7, a2, -1
+	addi	a7, a2, 5 - XCHAL_SPANNING_WAY
 	wdtlb	a3, a7
 	witlb	a3, a7
 	isync

 	slli	a4, a0, 5
 	srli	a4, a4, 5
-	addi	a5, a2, -6
+	addi	a5, a2, -XCHAL_SPANNING_WAY
 	add	a4, a4, a5
 	jx	a4
@@ -116,35 +119,48 @@
 	add	a5, a5, a4
 	bne	a5, a2, 3b

-	/* Step 4: Setup MMU with the old V2 mappings. */
+	/* Step 4: Setup MMU with the requested static mappings. */
+
 	movi	a6, 0x01000000
 	wsr	a6, ITLBCFG
 	wsr	a6, DTLBCFG
 	isync

-	movi	a5, 0xd0000005
-	movi	a4, CA_WRITEBACK
+	movi	a5, XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_TLB_WAY
+	movi	a4, XCHAL_KSEG_PADDR + CA_WRITEBACK
 	wdtlb	a4, a5
 	witlb	a4, a5

-	movi	a5, 0xd8000005
-	movi	a4, CA_BYPASS
+	movi	a5, XCHAL_KSEG_BYPASS_VADDR + XCHAL_KSEG_TLB_WAY
+	movi	a4, XCHAL_KSEG_PADDR + CA_BYPASS
 	wdtlb	a4, a5
 	witlb	a4, a5

-	movi	a5, XCHAL_KIO_CACHED_VADDR + 6
+#ifdef CONFIG_XTENSA_KSEG_512M
+	movi	a5, XCHAL_KSEG_CACHED_VADDR + 0x10000000 + XCHAL_KSEG_TLB_WAY
+	movi	a4, XCHAL_KSEG_PADDR + 0x10000000 + CA_WRITEBACK
+	wdtlb	a4, a5
+	witlb	a4, a5
+
+	movi	a5, XCHAL_KSEG_BYPASS_VADDR + 0x10000000 + XCHAL_KSEG_TLB_WAY
+	movi	a4, XCHAL_KSEG_PADDR + 0x10000000 + CA_BYPASS
+	wdtlb	a4, a5
+	witlb	a4, a5
+#endif
+
+	movi	a5, XCHAL_KIO_CACHED_VADDR + XCHAL_KIO_TLB_WAY
 	movi	a4, XCHAL_KIO_DEFAULT_PADDR + CA_WRITEBACK
 	wdtlb	a4, a5
 	witlb	a4, a5

-	movi	a5, XCHAL_KIO_BYPASS_VADDR + 6
+	movi	a5, XCHAL_KIO_BYPASS_VADDR + XCHAL_KIO_TLB_WAY
 	movi	a4, XCHAL_KIO_DEFAULT_PADDR + CA_BYPASS
 	wdtlb	a4, a5
 	witlb	a4, a5

 	isync

-	/* Jump to self, using MMU v2 mappings. */
+	/* Jump to self, using final mappings. */
 	movi	a4, 1f
 	jx	a4
......
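To make the Step 4 operand encoding above easier to follow: each wdtlb/witlb pair takes an entry selector built as "virtual address | TLB way" and a translation built as "physical address | cache attribute". A hedged user-space sketch of that composition for the default MMUv2-compatible layout (the CA_* values here are placeholders, not the kernel's real encodings):

#include <stdio.h>

#define XCHAL_KSEG_CACHED_VADDR	0xd0000000ul
#define XCHAL_KSEG_BYPASS_VADDR	0xd8000000ul
#define XCHAL_KSEG_PADDR	0x00000000ul
#define XCHAL_KSEG_TLB_WAY	5
#define CA_WRITEBACK		4ul	/* placeholder attribute value */
#define CA_BYPASS		2ul	/* placeholder attribute value */

int main(void)
{
	/* "movi a5, vaddr + way" selects the entry,
	 * "movi a4, paddr + ca" supplies the translation.
	 */
	printf("cached KSEG entry %08lx -> %08lx\n",
	       XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_TLB_WAY,
	       XCHAL_KSEG_PADDR + CA_WRITEBACK);
	printf("bypass KSEG entry %08lx -> %08lx\n",
	       XCHAL_KSEG_BYPASS_VADDR + XCHAL_KSEG_TLB_WAY,
	       XCHAL_KSEG_PADDR + CA_BYPASS);
	return 0;
}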
/*
* Kernel virtual memory layout definitions.
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file "COPYING" in the main directory of
* this archive for more details.
*
* Copyright (C) 2016 Cadence Design Systems Inc.
*/
#ifndef _XTENSA_KMEM_LAYOUT_H
#define _XTENSA_KMEM_LAYOUT_H
#include <asm/types.h>
#ifdef CONFIG_MMU
/*
* Fixed TLB translations in the processor.
*/
#define XCHAL_PAGE_TABLE_VADDR __XTENSA_UL_CONST(0x80000000)
#define XCHAL_PAGE_TABLE_SIZE __XTENSA_UL_CONST(0x00400000)
#if defined(CONFIG_XTENSA_KSEG_MMU_V2)
#define XCHAL_KSEG_CACHED_VADDR __XTENSA_UL_CONST(0xd0000000)
#define XCHAL_KSEG_BYPASS_VADDR __XTENSA_UL_CONST(0xd8000000)
#define XCHAL_KSEG_SIZE __XTENSA_UL_CONST(0x08000000)
#define XCHAL_KSEG_ALIGNMENT __XTENSA_UL_CONST(0x08000000)
#define XCHAL_KSEG_TLB_WAY 5
#define XCHAL_KIO_TLB_WAY 6
#elif defined(CONFIG_XTENSA_KSEG_256M)
#define XCHAL_KSEG_CACHED_VADDR __XTENSA_UL_CONST(0xb0000000)
#define XCHAL_KSEG_BYPASS_VADDR __XTENSA_UL_CONST(0xc0000000)
#define XCHAL_KSEG_SIZE __XTENSA_UL_CONST(0x10000000)
#define XCHAL_KSEG_ALIGNMENT __XTENSA_UL_CONST(0x10000000)
#define XCHAL_KSEG_TLB_WAY 6
#define XCHAL_KIO_TLB_WAY 6
#elif defined(CONFIG_XTENSA_KSEG_512M)
#define XCHAL_KSEG_CACHED_VADDR __XTENSA_UL_CONST(0xa0000000)
#define XCHAL_KSEG_BYPASS_VADDR __XTENSA_UL_CONST(0xc0000000)
#define XCHAL_KSEG_SIZE __XTENSA_UL_CONST(0x20000000)
#define XCHAL_KSEG_ALIGNMENT __XTENSA_UL_CONST(0x10000000)
#define XCHAL_KSEG_TLB_WAY 6
#define XCHAL_KIO_TLB_WAY 6
#else
#error Unsupported KSEG configuration
#endif
#ifdef CONFIG_KSEG_PADDR
#define XCHAL_KSEG_PADDR __XTENSA_UL_CONST(CONFIG_KSEG_PADDR)
#else
#define XCHAL_KSEG_PADDR __XTENSA_UL_CONST(0x00000000)
#endif
#if XCHAL_KSEG_PADDR & (XCHAL_KSEG_ALIGNMENT - 1)
#error XCHAL_KSEG_PADDR is not properly aligned to XCHAL_KSEG_ALIGNMENT
#endif
#else
#define XCHAL_KSEG_CACHED_VADDR __XTENSA_UL_CONST(0xd0000000)
#define XCHAL_KSEG_BYPASS_VADDR __XTENSA_UL_CONST(0xd8000000)
#define XCHAL_KSEG_SIZE __XTENSA_UL_CONST(0x08000000)
#endif
#endif
@@ -15,15 +15,7 @@
 #include <asm/types.h>
 #include <asm/cache.h>
 #include <platform/hardware.h>
-
-/*
- * Fixed TLB translations in the processor.
- */
-
-#define XCHAL_KSEG_CACHED_VADDR	__XTENSA_UL_CONST(0xd0000000)
-#define XCHAL_KSEG_BYPASS_VADDR	__XTENSA_UL_CONST(0xd8000000)
-#define XCHAL_KSEG_PADDR	__XTENSA_UL_CONST(0x00000000)
-#define XCHAL_KSEG_SIZE		__XTENSA_UL_CONST(0x08000000)
+#include <asm/kmem_layout.h>

 /*
  * PAGE_SHIFT determines the page size
@@ -35,10 +27,13 @@

 #ifdef CONFIG_MMU
 #define PAGE_OFFSET	XCHAL_KSEG_CACHED_VADDR
-#define MAX_MEM_PFN	XCHAL_KSEG_SIZE
+#define PHYS_OFFSET	XCHAL_KSEG_PADDR
+#define MAX_LOW_PFN	(PHYS_PFN(XCHAL_KSEG_PADDR) + \
+			 PHYS_PFN(XCHAL_KSEG_SIZE))
 #else
-#define PAGE_OFFSET	__XTENSA_UL_CONST(0)
-#define MAX_MEM_PFN	(PLATFORM_DEFAULT_MEM_START + PLATFORM_DEFAULT_MEM_SIZE)
+#define PAGE_OFFSET	PLATFORM_DEFAULT_MEM_START
+#define PHYS_OFFSET	PLATFORM_DEFAULT_MEM_START
+#define MAX_LOW_PFN	PHYS_PFN(0xfffffffful)
 #endif

 #define PGTABLE_START	0x80000000
@@ -167,10 +162,12 @@ void copy_user_highpage(struct page *to, struct page *from,
  * addresses.
  */

-#define ARCH_PFN_OFFSET		(PLATFORM_DEFAULT_MEM_START >> PAGE_SHIFT)
+#define ARCH_PFN_OFFSET		(PHYS_OFFSET >> PAGE_SHIFT)

-#define __pa(x)	((unsigned long) (x) - PAGE_OFFSET)
-#define __va(x)	((void *)((unsigned long) (x) + PAGE_OFFSET))
+#define __pa(x)	\
+	((unsigned long) (x) - PAGE_OFFSET + PHYS_OFFSET)
+#define __va(x)	\
+	((void *)((unsigned long) (x) - PHYS_OFFSET + PAGE_OFFSET))
 #define pfn_valid(pfn) \
 	((pfn) >= ARCH_PFN_OFFSET && ((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
......
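Since the __pa()/__va() change above is the heart of the non-zero KSEG_PADDR support, here is a small hedged user-space sketch of the same arithmetic (the 256MB-KSEG base is taken from the layout tables; the physical base is a hypothetical CONFIG_KSEG_PADDR value):

#include <stdio.h>

#define PAGE_OFFSET	0xb0000000ul	/* XCHAL_KSEG_CACHED_VADDR, 256MB layout */
#define PHYS_OFFSET	0x10000000ul	/* hypothetical XCHAL_KSEG_PADDR */

#define __pa(x)	((unsigned long)(x) - PAGE_OFFSET + PHYS_OFFSET)
#define __va(x)	((void *)((unsigned long)(x) - PHYS_OFFSET + PAGE_OFFSET))

int main(void)
{
	unsigned long vaddr = 0xb0123000ul;	/* some cached-KSEG address */
	unsigned long paddr = __pa(vaddr);	/* -> 0x10123000 */

	printf("va %08lx -> pa %08lx -> va %p\n", vaddr, paddr, __va(paddr));
	return 0;
}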
@@ -13,6 +13,7 @@
 #include <asm-generic/pgtable-nopmd.h>
 #include <asm/page.h>
+#include <asm/kmem_layout.h>

 /*
  * We only use two ring levels, user and kernel space.
@@ -68,9 +69,9 @@
  * Virtual memory area. We keep a distance to other memory regions to be
  * on the safe side. We also use this area for cache aliasing.
  */
-#define VMALLOC_START		0xC0000000
-#define VMALLOC_END		0xC7FEFFFF
-#define TLBTEMP_BASE_1		0xC7FF0000
+#define VMALLOC_START		(XCHAL_KSEG_CACHED_VADDR - 0x10000000)
+#define VMALLOC_END		(VMALLOC_START + 0x07FEFFFF)
+#define TLBTEMP_BASE_1		(VMALLOC_END + 1)
 #define TLBTEMP_BASE_2		(TLBTEMP_BASE_1 + DCACHE_WAY_SIZE)
 #if 2 * DCACHE_WAY_SIZE > ICACHE_WAY_SIZE
 #define TLBTEMP_SIZE		(2 * DCACHE_WAY_SIZE)
......
@@ -69,4 +69,10 @@ extern int platform_pcibios_fixup (void);
  */
 extern void platform_calibrate_ccount (void);

+/*
+ * Flush and reset the mmu, simulate a processor reset, and
+ * jump to the reset vector.
+ */
+void cpu_reset(void) __attribute__((noreturn));
+
 #endif	/* _XTENSA_PLATFORM_H */
@@ -37,7 +37,7 @@
 #ifdef CONFIG_MMU
 #define TASK_SIZE	__XTENSA_UL_CONST(0x40000000)
 #else
-#define TASK_SIZE	(PLATFORM_DEFAULT_MEM_START + PLATFORM_DEFAULT_MEM_SIZE)
+#define TASK_SIZE	__XTENSA_UL_CONST(0xffffffff)
 #endif

 #define STACK_TOP	TASK_SIZE
......
@@ -11,27 +11,8 @@
 #ifndef _XTENSA_SYSMEM_H
 #define _XTENSA_SYSMEM_H

-#define SYSMEM_BANKS_MAX 31
+#include <linux/memblock.h>

-struct meminfo {
-	unsigned long start;
-	unsigned long end;
-};
-
-/*
- * Bank array is sorted by .start.
- * Banks don't overlap and there's at least one page gap
- * between adjacent bank entries.
- */
-struct sysmem_info {
-	int nr_banks;
-	struct meminfo bank[SYSMEM_BANKS_MAX];
-};
-
-extern struct sysmem_info sysmem;
-
-int add_sysmem_bank(unsigned long start, unsigned long end);
-int mem_reserve(unsigned long, unsigned long, int);
 void bootmem_init(void);
 void zones_init(void);
......
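With sysmem gone, early memory bookkeeping is done through the generic memblock API; a hedged sketch of the replacement calls (addresses are made up, the real call sites appear in setup.c further down this diff):

#include <linux/memblock.h>

static void __init example_early_mem_setup(void)
{
	/* was: add_sysmem_bank(start, end) */
	memblock_add(0x00000000, 0x10000000);		/* 256MB of RAM at 0 */

	/* was: mem_reserve(start, end, must_exist) */
	memblock_reserve(0x00003000, 0x00400000);	/* keep the kernel image */
}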
@@ -20,6 +20,7 @@

 #include <variant/core.h>
 #include <platform/hardware.h>
+#include <asm/kmem_layout.h>

 #if XCHAL_HAVE_PTP_MMU
 #define XCHAL_KIO_CACHED_VADDR		0xe0000000
@@ -47,61 +48,42 @@ static inline unsigned long xtensa_get_kio_paddr(void)

 #if defined(CONFIG_MMU)

-/* Will Become VECBASE */
-#define VIRTUAL_MEMORY_ADDRESS		0xD0000000
-
+#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
 /* Image Virtual Start Address */
-#define KERNELOFFSET			0xD0003000
-
-#if defined(XCHAL_HAVE_PTP_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
-  /* MMU v3 - XCHAL_HAVE_PTP_MMU == 1 */
-  #define LOAD_MEMORY_ADDRESS		0x00003000
+#define KERNELOFFSET			(XCHAL_KSEG_CACHED_VADDR + \
+					 CONFIG_KERNEL_LOAD_ADDRESS - \
+					 XCHAL_KSEG_PADDR)
 #else
-  /* MMU V2 - XCHAL_HAVE_PTP_MMU == 0 */
-  #define LOAD_MEMORY_ADDRESS		0xD0003000
+#define KERNELOFFSET			CONFIG_KERNEL_LOAD_ADDRESS
 #endif

-#define RESET_VECTOR1_VADDR		(VIRTUAL_MEMORY_ADDRESS + \
-					 XCHAL_RESET_VECTOR1_PADDR)
-
 #else /* !defined(CONFIG_MMU) */
 /* MMU Not being used - Virtual == Physical */

-  /* VECBASE */
-  #define VIRTUAL_MEMORY_ADDRESS	(PLATFORM_DEFAULT_MEM_START + 0x2000)
-
-  /* Location of the start of the kernel text, _start */
-  #define KERNELOFFSET			(PLATFORM_DEFAULT_MEM_START + 0x3000)
-
-  /* Loaded just above possibly live vectors */
-  #define LOAD_MEMORY_ADDRESS		(PLATFORM_DEFAULT_MEM_START + 0x3000)
-
-#define RESET_VECTOR1_VADDR		(XCHAL_RESET_VECTOR1_VADDR)
+/* Location of the start of the kernel text, _start */
+#define KERNELOFFSET			CONFIG_KERNEL_LOAD_ADDRESS

 #endif /* CONFIG_MMU */

-#define XC_VADDR(offset)		(VIRTUAL_MEMORY_ADDRESS + offset)
-
-/* Used to set VECBASE register */
-#define VECBASE_RESET_VADDR		VIRTUAL_MEMORY_ADDRESS
+#define RESET_VECTOR1_VADDR		(XCHAL_RESET_VECTOR1_VADDR)
+#define VECBASE_VADDR			(KERNELOFFSET - CONFIG_VECTORS_OFFSET)

 #if defined(XCHAL_HAVE_VECBASE) && XCHAL_HAVE_VECBASE

-#define USER_VECTOR_VADDR		XC_VADDR(XCHAL_USER_VECOFS)
-#define KERNEL_VECTOR_VADDR		XC_VADDR(XCHAL_KERNEL_VECOFS)
-#define DOUBLEEXC_VECTOR_VADDR		XC_VADDR(XCHAL_DOUBLEEXC_VECOFS)
-#define WINDOW_VECTORS_VADDR		XC_VADDR(XCHAL_WINDOW_OF4_VECOFS)
-#define INTLEVEL2_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL2_VECOFS)
-#define INTLEVEL3_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL3_VECOFS)
-#define INTLEVEL4_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL4_VECOFS)
-#define INTLEVEL5_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL5_VECOFS)
-#define INTLEVEL6_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL6_VECOFS)
-#define DEBUG_VECTOR_VADDR		XC_VADDR(XCHAL_DEBUG_VECOFS)
-#define NMI_VECTOR_VADDR		XC_VADDR(XCHAL_NMI_VECOFS)
-#define INTLEVEL7_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL7_VECOFS)
+#define VECTOR_VADDR(offset)		(VECBASE_VADDR + offset)
+
+#define USER_VECTOR_VADDR		VECTOR_VADDR(XCHAL_USER_VECOFS)
+#define KERNEL_VECTOR_VADDR		VECTOR_VADDR(XCHAL_KERNEL_VECOFS)
+#define DOUBLEEXC_VECTOR_VADDR		VECTOR_VADDR(XCHAL_DOUBLEEXC_VECOFS)
+#define WINDOW_VECTORS_VADDR		VECTOR_VADDR(XCHAL_WINDOW_OF4_VECOFS)
+#define INTLEVEL2_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL2_VECOFS)
+#define INTLEVEL3_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL3_VECOFS)
+#define INTLEVEL4_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL4_VECOFS)
+#define INTLEVEL5_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL5_VECOFS)
+#define INTLEVEL6_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL6_VECOFS)
+#define INTLEVEL7_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL7_VECOFS)
+#define DEBUG_VECTOR_VADDR		VECTOR_VADDR(XCHAL_DEBUG_VECOFS)

 /*
  * These XCHAL_* #defines from varian/core.h
@@ -109,7 +91,6 @@ static inline unsigned long xtensa_get_kio_paddr(void)
  * constants are defined above and should be used.
  */
 #undef  XCHAL_VECBASE_RESET_VADDR
-#undef  XCHAL_RESET_VECTOR0_VADDR
 #undef  XCHAL_USER_VECTOR_VADDR
 #undef  XCHAL_KERNEL_VECTOR_VADDR
 #undef  XCHAL_DOUBLEEXC_VECTOR_VADDR
@@ -119,9 +100,8 @@ static inline unsigned long xtensa_get_kio_paddr(void)
 #undef  XCHAL_INTLEVEL4_VECTOR_VADDR
 #undef  XCHAL_INTLEVEL5_VECTOR_VADDR
 #undef  XCHAL_INTLEVEL6_VECTOR_VADDR
-#undef  XCHAL_DEBUG_VECTOR_VADDR
-#undef  XCHAL_NMI_VECTOR_VADDR
 #undef  XCHAL_INTLEVEL7_VECTOR_VADDR
+#undef  XCHAL_DEBUG_VECTOR_VADDR

 #else

@@ -134,6 +114,7 @@ static inline unsigned long xtensa_get_kio_paddr(void)
 #define INTLEVEL4_VECTOR_VADDR		XCHAL_INTLEVEL4_VECTOR_VADDR
 #define INTLEVEL5_VECTOR_VADDR		XCHAL_INTLEVEL5_VECTOR_VADDR
 #define INTLEVEL6_VECTOR_VADDR		XCHAL_INTLEVEL6_VECTOR_VADDR
+#define INTLEVEL7_VECTOR_VADDR		XCHAL_INTLEVEL6_VECTOR_VADDR
 #define DEBUG_VECTOR_VADDR		XCHAL_DEBUG_VECTOR_VADDR
 #endif
......
@@ -18,7 +18,8 @@
 # define __XTENSA_UL_CONST(x)	x
 #else
 # define __XTENSA_UL(x)		((unsigned long)(x))
-# define __XTENSA_UL_CONST(x)	x##UL
+# define ___XTENSA_UL_CONST(x)	x##UL
+# define __XTENSA_UL_CONST(x)	___XTENSA_UL_CONST(x)
 #endif

 #ifndef __ASSEMBLY__
......
@@ -754,7 +754,20 @@ __SYSCALL(340, sys_bpf, 3)
 #define __NR_execveat				341
 __SYSCALL(341, sys_execveat, 5)

-#define __NR_syscall_count			342
+#define __NR_userfaultfd			342
+__SYSCALL(342, sys_userfaultfd, 1)
+#define __NR_membarrier				343
+__SYSCALL(343, sys_membarrier, 2)
+#define __NR_mlock2				344
+__SYSCALL(344, sys_mlock2, 3)
+#define __NR_copy_file_range			345
+__SYSCALL(345, sys_copy_file_range, 6)
+#define __NR_preadv2				346
+__SYSCALL(346, sys_preadv2, 6)
+#define __NR_pwritev2				347
+__SYSCALL(347, sys_pwritev2, 6)
+
+#define __NR_syscall_count			348

 /*
  * sysxtensa syscall handler
......
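The new syscall numbers can be exercised from user space with syscall(2) even before libc grows wrappers; a hedged example using the membarrier number wired up above (the number is xtensa-specific, other architectures differ):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_membarrier
#define __NR_membarrier 343	/* xtensa value from the table above */
#endif

#define MEMBARRIER_CMD_QUERY 0	/* ask the kernel which commands it supports */

int main(void)
{
	long ret = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);

	printf("membarrier(QUERY) returned %ld\n", ret);
	return 0;
}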
@@ -1632,10 +1632,11 @@ ENTRY(fast_second_level_miss)
 	 * The messy computation for 'pteval' above really simplifies
 	 * into the following:
 	 *
-	 * pteval = ((pmdval - PAGE_OFFSET) & PAGE_MASK) | PAGE_DIRECTORY
+	 * pteval = ((pmdval - PAGE_OFFSET + PHYS_OFFSET) & PAGE_MASK)
+	 *        | PAGE_DIRECTORY
 	 */

-	movi	a1, (-PAGE_OFFSET) & 0xffffffff
+	movi	a1, (PHYS_OFFSET - PAGE_OFFSET) & 0xffffffff
 	add	a0, a0, a1		# pmdval - PAGE_OFFSET
 	extui	a1, a0, 0, PAGE_SHIFT	# ... & PAGE_MASK
 	xor	a0, a0, a1
......
@@ -113,7 +113,7 @@ ENTRY(_startup)
 	movi	a0, 0

 #if XCHAL_HAVE_VECBASE
-	movi	a2, VECBASE_RESET_VADDR
+	movi	a2, VECBASE_VADDR
 	wsr	a2, vecbase
 #endif
......
@@ -7,6 +7,7 @@
  *
  * Copyright (C) 1995 Linus Torvalds
  * Copyright (C) 2001 - 2005 Tensilica Inc.
+ * Copyright (C) 2014 - 2016 Cadence Design Systems Inc.
  *
  * Chris Zankel	<chris@zankel.net>
  * Joe Taylor	<joe@tensilica.com, joetylr@yahoo.com>
@@ -22,7 +23,6 @@
 #include <linux/bootmem.h>
 #include <linux/kernel.h>
 #include <linux/percpu.h>
-#include <linux/clk-provider.h>
 #include <linux/cpu.h>
 #include <linux/of.h>
 #include <linux/of_fdt.h>
@@ -114,7 +114,7 @@ static int __init parse_tag_mem(const bp_tag_t *tag)
 	if (mi->type != MEMORY_TYPE_CONVENTIONAL)
 		return -1;

-	return add_sysmem_bank(mi->start, mi->end);
+	return memblock_add(mi->start, mi->end - mi->start);
 }

 __tagtable(BP_TAG_MEMORY, parse_tag_mem);
@@ -188,7 +188,6 @@ static int __init parse_bootparam(const bp_tag_t* tag)
 }

 #ifdef CONFIG_OF
-bool __initdata dt_memory_scan = false;

 #if !XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY
 unsigned long xtensa_kio_paddr = XCHAL_KIO_DEFAULT_PADDR;
@@ -228,11 +227,8 @@ static int __init xtensa_dt_io_area(unsigned long node, const char *uname,

 void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 {
-	if (!dt_memory_scan)
-		return;
-
 	size &= PAGE_MASK;
-	add_sysmem_bank(base, base + size);
+	memblock_add(base, size);
 }

 void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
@@ -242,9 +238,6 @@ void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)

 void __init early_init_devtree(void *params)
 {
-	if (sysmem.nr_banks == 0)
-		dt_memory_scan = true;
-
 	early_init_dt_scan(params);
 	of_scan_flat_dt(xtensa_dt_io_area, NULL);

@@ -252,14 +245,6 @@ void __init early_init_devtree(void *params)
 	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
 }

-static int __init xtensa_device_probe(void)
-{
-	of_clk_init(NULL);
-	return 0;
-}
-
-device_initcall(xtensa_device_probe);
-
 #endif /* CONFIG_OF */

 /*
@@ -277,12 +262,6 @@ void __init init_arch(bp_tag_t *bp_start)
 	early_init_devtree(dtb_start);
 #endif

-	if (sysmem.nr_banks == 0) {
-		add_sysmem_bank(PLATFORM_DEFAULT_MEM_START,
-				PLATFORM_DEFAULT_MEM_START +
-				PLATFORM_DEFAULT_MEM_SIZE);
-	}
-
 #ifdef CONFIG_CMDLINE_BOOL
 	if (!command_line[0])
 		strlcpy(command_line, default_command_line, COMMAND_LINE_SIZE);
@@ -452,6 +431,10 @@ static int __init check_s32c1i(void)
 early_initcall(check_s32c1i);
 #endif /* CONFIG_S32C1I_SELFTEST */

+static inline int mem_reserve(unsigned long start, unsigned long end)
+{
+	return memblock_reserve(start, end - start);
+}
 void __init setup_arch(char **cmdline_p)
 {
@@ -463,54 +446,54 @@ void __init setup_arch(char **cmdline_p)

 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start < initrd_end) {
 		initrd_is_mapped = mem_reserve(__pa(initrd_start),
-					       __pa(initrd_end), 0) == 0;
+					       __pa(initrd_end)) == 0;
 		initrd_below_start_ok = 1;
 	} else {
 		initrd_start = 0;
 	}
 #endif

-	mem_reserve(__pa(&_stext),__pa(&_end), 1);
+	mem_reserve(__pa(&_stext), __pa(&_end));

 	mem_reserve(__pa(&_WindowVectors_text_start),
-		    __pa(&_WindowVectors_text_end), 0);
+		    __pa(&_WindowVectors_text_end));

 	mem_reserve(__pa(&_DebugInterruptVector_literal_start),
-		    __pa(&_DebugInterruptVector_text_end), 0);
+		    __pa(&_DebugInterruptVector_text_end));

 	mem_reserve(__pa(&_KernelExceptionVector_literal_start),
-		    __pa(&_KernelExceptionVector_text_end), 0);
+		    __pa(&_KernelExceptionVector_text_end));

 	mem_reserve(__pa(&_UserExceptionVector_literal_start),
-		    __pa(&_UserExceptionVector_text_end), 0);
+		    __pa(&_UserExceptionVector_text_end));

 	mem_reserve(__pa(&_DoubleExceptionVector_literal_start),
-		    __pa(&_DoubleExceptionVector_text_end), 0);
+		    __pa(&_DoubleExceptionVector_text_end));

 #if XCHAL_EXCM_LEVEL >= 2
 	mem_reserve(__pa(&_Level2InterruptVector_text_start),
-		    __pa(&_Level2InterruptVector_text_end), 0);
+		    __pa(&_Level2InterruptVector_text_end));
 #endif
 #if XCHAL_EXCM_LEVEL >= 3
 	mem_reserve(__pa(&_Level3InterruptVector_text_start),
-		    __pa(&_Level3InterruptVector_text_end), 0);
+		    __pa(&_Level3InterruptVector_text_end));
 #endif
 #if XCHAL_EXCM_LEVEL >= 4
 	mem_reserve(__pa(&_Level4InterruptVector_text_start),
-		    __pa(&_Level4InterruptVector_text_end), 0);
+		    __pa(&_Level4InterruptVector_text_end));
 #endif
 #if XCHAL_EXCM_LEVEL >= 5
 	mem_reserve(__pa(&_Level5InterruptVector_text_start),
-		    __pa(&_Level5InterruptVector_text_end), 0);
+		    __pa(&_Level5InterruptVector_text_end));
 #endif
 #if XCHAL_EXCM_LEVEL >= 6
 	mem_reserve(__pa(&_Level6InterruptVector_text_start),
-		    __pa(&_Level6InterruptVector_text_end), 0);
+		    __pa(&_Level6InterruptVector_text_end));
 #endif

 #ifdef CONFIG_SMP
 	mem_reserve(__pa(&_SecondaryResetVector_text_start),
-		    __pa(&_SecondaryResetVector_text_end), 0);
+		    __pa(&_SecondaryResetVector_text_end));
 #endif

 	parse_early_param();
 	bootmem_init();
@@ -555,6 +538,137 @@ static int __init topology_init(void)
 }
 subsys_initcall(topology_init);
void cpu_reset(void)
{
#if XCHAL_HAVE_PTP_MMU
local_irq_disable();
/*
* We have full MMU: all autoload ways, ways 7, 8 and 9 of DTLB must
* be flushed.
* Way 4 is not currently used by linux.
* Ways 5 and 6 shall not be touched on MMUv2 as they are hardwired.
* Way 5 shall be flushed and way 6 shall be set to identity mapping
* on MMUv3.
*/
local_flush_tlb_all();
invalidate_page_directory();
#if XCHAL_HAVE_SPANNING_WAY
/* MMU v3 */
{
unsigned long vaddr = (unsigned long)cpu_reset;
unsigned long paddr = __pa(vaddr);
unsigned long tmpaddr = vaddr + SZ_512M;
unsigned long tmp0, tmp1, tmp2, tmp3;
/*
* Find a place for the temporary mapping. It must not be
* in the same 512MB region with vaddr or paddr, otherwise
* there may be multihit exception either on entry to the
* temporary mapping, or on entry to the identity mapping.
* (512MB is the biggest page size supported by TLB.)
*/
while (((tmpaddr ^ paddr) & -SZ_512M) == 0)
tmpaddr += SZ_512M;
/* Invalidate mapping in the selected temporary area */
if (itlb_probe(tmpaddr) & 0x8)
invalidate_itlb_entry(itlb_probe(tmpaddr));
if (itlb_probe(tmpaddr + PAGE_SIZE) & 0x8)
invalidate_itlb_entry(itlb_probe(tmpaddr + PAGE_SIZE));
/*
* Map two consecutive pages starting at the physical address
* of this function to the temporary mapping area.
*/
write_itlb_entry(__pte((paddr & PAGE_MASK) |
_PAGE_HW_VALID |
_PAGE_HW_EXEC |
_PAGE_CA_BYPASS),
tmpaddr & PAGE_MASK);
write_itlb_entry(__pte(((paddr & PAGE_MASK) + PAGE_SIZE) |
_PAGE_HW_VALID |
_PAGE_HW_EXEC |
_PAGE_CA_BYPASS),
(tmpaddr & PAGE_MASK) + PAGE_SIZE);
/* Reinitialize TLB */
__asm__ __volatile__ ("movi %0, 1f\n\t"
"movi %3, 2f\n\t"
"add %0, %0, %4\n\t"
"add %3, %3, %5\n\t"
"jx %0\n"
/*
* No literal, data or stack access
* below this point
*/
"1:\n\t"
/* Initialize *tlbcfg */
"movi %0, 0\n\t"
"wsr %0, itlbcfg\n\t"
"wsr %0, dtlbcfg\n\t"
/* Invalidate TLB way 5 */
"movi %0, 4\n\t"
"movi %1, 5\n"
"1:\n\t"
"iitlb %1\n\t"
"idtlb %1\n\t"
"add %1, %1, %6\n\t"
"addi %0, %0, -1\n\t"
"bnez %0, 1b\n\t"
/* Initialize TLB way 6 */
"movi %0, 7\n\t"
"addi %1, %9, 3\n\t"
"addi %2, %9, 6\n"
"1:\n\t"
"witlb %1, %2\n\t"
"wdtlb %1, %2\n\t"
"add %1, %1, %7\n\t"
"add %2, %2, %7\n\t"
"addi %0, %0, -1\n\t"
"bnez %0, 1b\n\t"
/* Jump to identity mapping */
"jx %3\n"
"2:\n\t"
/* Complete way 6 initialization */
"witlb %1, %2\n\t"
"wdtlb %1, %2\n\t"
/* Invalidate temporary mapping */
"sub %0, %9, %7\n\t"
"iitlb %0\n\t"
"add %0, %0, %8\n\t"
"iitlb %0"
: "=&a"(tmp0), "=&a"(tmp1), "=&a"(tmp2),
"=&a"(tmp3)
: "a"(tmpaddr - vaddr),
"a"(paddr - vaddr),
"a"(SZ_128M), "a"(SZ_512M),
"a"(PAGE_SIZE),
"a"((tmpaddr + SZ_512M) & PAGE_MASK)
: "memory");
}
#endif
#endif
__asm__ __volatile__ ("movi a2, 0\n\t"
"wsr a2, icountlevel\n\t"
"movi a2, 0\n\t"
"wsr a2, icount\n\t"
#if XCHAL_NUM_IBREAK > 0
"wsr a2, ibreakenable\n\t"
#endif
#if XCHAL_HAVE_LOOPS
"wsr a2, lcount\n\t"
#endif
"movi a2, 0x1f\n\t"
"wsr a2, ps\n\t"
"isync\n\t"
"jx %0\n\t"
:
: "a" (XCHAL_RESET_VECTOR_VADDR)
: "a2");
for (;;)
;
}
 void machine_restart(char * cmd)
 {
 	platform_restart();
......
@@ -12,6 +12,8 @@
  * Chris Zankel <chris@zankel.net>
  */

+#include <linux/clk.h>
+#include <linux/clk-provider.h>
 #include <linux/errno.h>
 #include <linux/sched.h>
 #include <linux/time.h>
@@ -134,16 +136,52 @@ void local_timer_setup(unsigned cpu)
 					0xf, 0xffffffff);
 }

+#ifdef CONFIG_XTENSA_CALIBRATE_CCOUNT
+#ifdef CONFIG_OF
+static void __init calibrate_ccount(void)
+{
+	struct device_node *cpu;
+	struct clk *clk;
+
+	cpu = of_find_compatible_node(NULL, NULL, "cdns,xtensa-cpu");
+	if (cpu) {
+		clk = of_clk_get(cpu, 0);
+		if (!IS_ERR(clk)) {
+			ccount_freq = clk_get_rate(clk);
+			return;
+		} else {
+			pr_warn("%s: CPU input clock not found\n",
+				__func__);
+		}
+	} else {
+		pr_warn("%s: CPU node not found in the device tree\n",
+			__func__);
+	}
+
+	platform_calibrate_ccount();
+}
+#else
+static inline void calibrate_ccount(void)
+{
+	platform_calibrate_ccount();
+}
+#endif
+#endif
+
 void __init time_init(void)
 {
+	of_clk_init(NULL);
+
 #ifdef CONFIG_XTENSA_CALIBRATE_CCOUNT
 	printk("Calibrating CPU frequency ");
-	platform_calibrate_ccount();
+	calibrate_ccount();
 	printk("%d.%02d MHz\n", (int)ccount_freq/1000000,
 			(int)(ccount_freq/10000)%100);
 #else
 	ccount_freq = CONFIG_XTENSA_CPU_CLOCK*1000000UL;
 #endif
+	WARN(!ccount_freq,
+	     "%s: CPU clock frequency is not set up correctly\n",
+	     __func__);
 	clocksource_register_hz(&ccount_clocksource, ccount_freq);
 	local_timer_setup(0);
 	setup_irq(this_cpu_ptr(&ccount_timer)->evt.irq, &timer_irqaction);
......
@@ -30,10 +30,6 @@ jiffies = jiffies_64 + 4;
 jiffies = jiffies_64;
 #endif

-#ifndef KERNELOFFSET
-#define KERNELOFFSET 0xd0003000
-#endif
-
 /* Note: In the following macros, it would be nice to specify only the
    vector name and section kind and construct "sym" and "section" using
    CPP concatenation, but that does not work reliably.  Concatenating a
......
...@@ -8,7 +8,7 @@
 * for more details.
 *
 * Copyright (C) 2001 - 2005 Tensilica Inc.
 * Copyright (C) 2014 Cadence Design Systems Inc.
 * Copyright (C) 2014 - 2016 Cadence Design Systems Inc.
 *
 * Chris Zankel <chris@zankel.net>
 * Joe Taylor <joe@tensilica.com, joetylr@yahoo.com>
...@@ -25,284 +25,43 @@
#include <linux/mman.h>
#include <linux/nodemask.h>
#include <linux/mm.h>
#include <linux/of_fdt.h>
#include <asm/bootparam.h>
#include <asm/page.h>
#include <asm/sections.h>
#include <asm/sysmem.h>
struct sysmem_info sysmem __initdata;
static void __init sysmem_dump(void)
{
unsigned i;
pr_debug("Sysmem:\n");
for (i = 0; i < sysmem.nr_banks; ++i)
pr_debug(" 0x%08lx - 0x%08lx (%ldK)\n",
sysmem.bank[i].start, sysmem.bank[i].end,
(sysmem.bank[i].end - sysmem.bank[i].start) >> 10);
}
/*
* Find bank with maximal .start such that bank.start <= start
*/
static inline struct meminfo * __init find_bank(unsigned long start)
{
unsigned i;
struct meminfo *it = NULL;
for (i = 0; i < sysmem.nr_banks; ++i)
if (sysmem.bank[i].start <= start)
it = sysmem.bank + i;
else
break;
return it;
}
/*
* Move all memory banks starting at 'from' to a new place at 'to',
* adjust nr_banks accordingly.
* Both 'from' and 'to' must be inside the sysmem.bank.
*
* Returns: 0 (success), -ENOMEM (not enough space in the sysmem.bank).
*/
static int __init move_banks(struct meminfo *to, struct meminfo *from)
{
unsigned n = sysmem.nr_banks - (from - sysmem.bank);
if (to > from && to - from + sysmem.nr_banks > SYSMEM_BANKS_MAX)
return -ENOMEM;
if (to != from)
memmove(to, from, n * sizeof(struct meminfo));
sysmem.nr_banks += to - from;
return 0;
}
/*
* Add new bank to sysmem. Resulting sysmem is the union of bytes of the
* original sysmem and the new bank.
*
* Returns: 0 (success), < 0 (error)
*/
int __init add_sysmem_bank(unsigned long start, unsigned long end)
{
unsigned i;
struct meminfo *it = NULL;
unsigned long sz;
unsigned long bank_sz = 0;
if (start == end ||
(start < end) != (PAGE_ALIGN(start) < (end & PAGE_MASK))) {
pr_warn("Ignoring small memory bank 0x%08lx size: %ld bytes\n",
start, end - start);
return -EINVAL;
}
start = PAGE_ALIGN(start);
end &= PAGE_MASK;
sz = end - start;
it = find_bank(start);
if (it)
bank_sz = it->end - it->start;
if (it && bank_sz >= start - it->start) {
if (end - it->start > bank_sz)
it->end = end;
else
return 0;
} else {
if (!it)
it = sysmem.bank;
else
++it;
if (it - sysmem.bank < sysmem.nr_banks &&
it->start - start <= sz) {
it->start = start;
if (it->end - it->start < sz)
it->end = end;
else
return 0;
} else {
if (move_banks(it + 1, it) < 0) {
pr_warn("Ignoring memory bank 0x%08lx size %ld bytes\n",
start, end - start);
return -EINVAL;
}
it->start = start;
it->end = end;
return 0;
}
}
sz = it->end - it->start;
for (i = it + 1 - sysmem.bank; i < sysmem.nr_banks; ++i)
if (sysmem.bank[i].start - it->start <= sz) {
if (sz < sysmem.bank[i].end - it->start)
it->end = sysmem.bank[i].end;
} else {
break;
}
move_banks(it + 1, sysmem.bank + i);
return 0;
}
/*
* mem_reserve(start, end, must_exist)
*
* Reserve some memory from the memory pool.
* If must_exist is set and a part of the region being reserved does not exist
* memory map is not altered.
*
* Parameters:
* start Start of region,
* end End of region,
* must_exist Must exist in memory pool.
*
* Returns:
* 0 (success)
* < 0 (error)
*/
int __init mem_reserve(unsigned long start, unsigned long end, int must_exist)
{
struct meminfo *it;
struct meminfo *rm = NULL;
unsigned long sz;
unsigned long bank_sz = 0;
start = start & PAGE_MASK;
end = PAGE_ALIGN(end);
sz = end - start;
if (!sz)
return -EINVAL;
it = find_bank(start);
if (it)
bank_sz = it->end - it->start;
if ((!it || end - it->start > bank_sz) && must_exist) {
pr_warn("mem_reserve: [0x%0lx, 0x%0lx) not in any region!\n",
start, end);
return -EINVAL;
}
if (it && start - it->start <= bank_sz) {
if (start == it->start) {
if (end - it->start < bank_sz) {
it->start = end;
return 0;
} else {
rm = it;
}
} else {
it->end = start;
if (end - it->start < bank_sz)
return add_sysmem_bank(end,
it->start + bank_sz);
++it;
}
}
if (!it)
it = sysmem.bank;
for (; it < sysmem.bank + sysmem.nr_banks; ++it) {
if (it->end - start <= sz) {
if (!rm)
rm = it;
} else {
if (it->start - start < sz)
it->start = end;
break;
}
}
if (rm)
move_banks(rm, it);
return 0;
}
/*
 * Initialize the bootmem system and give it all low memory we have available.
 */
void __init bootmem_init(void)
{
unsigned long pfn;
unsigned long bootmap_start, bootmap_size;
int i;
/* Reserve all memory below PLATFORM_DEFAULT_MEM_START, as memory
 * accounting doesn't work for pages below that address.
 *
 * If PLATFORM_DEFAULT_MEM_START is zero reserve page at address 0:
 * successfull allocations should never return NULL.
 */
if (PLATFORM_DEFAULT_MEM_START)
mem_reserve(0, PLATFORM_DEFAULT_MEM_START, 0);
else
mem_reserve(0, 1, 0);
sysmem_dump();
max_low_pfn = max_pfn = 0;
min_low_pfn = ~0;
for (i=0; i < sysmem.nr_banks; i++) {
pfn = PAGE_ALIGN(sysmem.bank[i].start) >> PAGE_SHIFT;
if (pfn < min_low_pfn)
min_low_pfn = pfn;
pfn = PAGE_ALIGN(sysmem.bank[i].end - 1) >> PAGE_SHIFT;
if (pfn > max_pfn)
max_pfn = pfn;
}
if (min_low_pfn > max_pfn)
panic("No memory found!\n");
max_low_pfn = max_pfn < MAX_MEM_PFN >> PAGE_SHIFT ?
max_pfn : MAX_MEM_PFN >> PAGE_SHIFT;
/* Find an area to use for the bootmem bitmap. */
bootmap_size = bootmem_bootmap_pages(max_low_pfn - min_low_pfn);
bootmap_size <<= PAGE_SHIFT;
bootmap_start = ~0;
for (i=0; i<sysmem.nr_banks; i++)
if (sysmem.bank[i].end - sysmem.bank[i].start >= bootmap_size) {
bootmap_start = sysmem.bank[i].start;
break;
}
if (bootmap_start == ~0UL)
panic("Cannot find %ld bytes for bootmap\n", bootmap_size);
/* Reserve the bootmem bitmap area */
mem_reserve(bootmap_start, bootmap_start + bootmap_size, 1);
bootmap_size = init_bootmem_node(NODE_DATA(0),
bootmap_start >> PAGE_SHIFT,
min_low_pfn,
max_low_pfn);
/* Add all remaining memory pieces into the bootmem map */
for (i = 0; i < sysmem.nr_banks; i++) {
if (sysmem.bank[i].start >> PAGE_SHIFT < max_low_pfn) {
unsigned long end = min(max_low_pfn << PAGE_SHIFT,
sysmem.bank[i].end);
free_bootmem(sysmem.bank[i].start,
end - sysmem.bank[i].start);
}
}
}
void __init bootmem_init(void)
{
/* Reserve all memory below PHYS_OFFSET, as memory
 * accounting doesn't work for pages below that address.
 *
 * If PHYS_OFFSET is zero reserve page at address 0:
 * successfull allocations should never return NULL.
 */
if (PHYS_OFFSET)
memblock_reserve(0, PHYS_OFFSET);
else
memblock_reserve(0, 1);
early_init_fdt_scan_reserved_mem();
if (!memblock_phys_mem_size())
panic("No memory found!\n");
min_low_pfn = PFN_UP(memblock_start_of_DRAM());
min_low_pfn = max(min_low_pfn, PFN_UP(PHYS_OFFSET));
max_pfn = PFN_DOWN(memblock_end_of_DRAM());
max_low_pfn = min(max_pfn, MAX_LOW_PFN);
memblock_set_current_limit(PFN_PHYS(max_low_pfn));
memblock_dump_all();
}
...@@ -344,7 +103,7 @@ void __init mem_init(void)
" fixmap : 0x%08lx - 0x%08lx (%5lu kB)\n"
#endif
#ifdef CONFIG_MMU
" vmalloc : 0x%08x - 0x%08x (%5u MB)\n"
" vmalloc : 0x%08lx - 0x%08lx (%5lu MB)\n"
#endif
" lowmem : 0x%08lx - 0x%08lx (%5lu MB)\n",
#ifdef CONFIG_HIGHMEM
...@@ -395,16 +154,16 @@ static void __init parse_memmap_one(char *p)
switch (*p) {
case '@':
start_at = memparse(p + 1, &p);
add_sysmem_bank(start_at, start_at + mem_size);
memblock_add(start_at, mem_size);
break;
case '$':
start_at = memparse(p + 1, &p);
mem_reserve(start_at, start_at + mem_size, 0);
memblock_reserve(start_at, mem_size);
break;
case 0:
mem_reserve(mem_size, 0, 0);
memblock_reserve(mem_size, -mem_size);
break;
default:
...
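The three option forms handled in the switch above follow the usual memmap= convention; for illustration only (the values here are made up, not taken from this commit): memmap=96M@0 adds a 96 MB RAM region starting at physical address 0, memmap=1M$0x0ff00000 reserves 1 MB at that address, and a bare memmap=64M reserves everything above 64 MB, effectively capping the usable memory.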
...@@ -76,6 +76,11 @@ static inline int __simc(int a, int b, int c, int d)
return ret;
}
static inline int simc_exit(int exit_code)
{
return __simc(SYS_exit, exit_code, 0, 0);
}
static inline int simc_open(const char *file, int flags, int mode)
{
return __simc(SYS_open, (int) file, flags, mode);
...
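The new simc_exit() helper simply wraps the exit simcall, so that platform_halt(), platform_power_off() and the ISS panic notifier in the changes that follow can call one common C helper instead of open-coding movi/simcall sequences in inline assembly.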
...@@ -32,6 +32,8 @@
#include <asm/platform.h>
#include <asm/bootparam.h>
#include <platform/simcall.h>
void __init platform_init(bp_tag_t* bootparam)
{
...@@ -41,37 +43,19 @@ void __init platform_init(bp_tag_t* bootparam)
void platform_halt(void)
{
pr_info(" ** Called platform_halt() **\n");
__asm__ __volatile__("movi a2, 1\nsimcall\n");
simc_exit(0);
}
void platform_power_off(void)
{
pr_info(" ** Called platform_power_off() **\n");
__asm__ __volatile__("movi a2, 1\nsimcall\n");
simc_exit(0);
}
void platform_restart(void)
{
/* Flush and reset the mmu, simulate a processor reset, and
 * jump to the reset vector. */
cpu_reset();
__asm__ __volatile__("movi a2, 15\n\t"
"wsr a2, icountlevel\n\t"
"movi a2, 0\n\t"
"wsr a2, icount\n\t"
#if XCHAL_NUM_IBREAK > 0
"wsr a2, ibreakenable\n\t"
#endif
#if XCHAL_HAVE_LOOPS
"wsr a2, lcount\n\t"
#endif
"movi a2, 0x1f\n\t"
"wsr a2, ps\n\t"
"isync\n\t"
"jx %0\n\t"
:
: "a" (XCHAL_RESET_VECTOR_VADDR)
: "a2");
/* control never gets here */
}
...@@ -98,7 +82,7 @@ void platform_heartbeat(void)
static int
iss_panic_event(struct notifier_block *this, unsigned long event, void *ptr)
{
__asm__ __volatile__("movi a2, -1; simcall\n");
simc_exit(1);
return NOTIFY_DONE;
}
...
...@@ -86,6 +86,7 @@ static void simdisk_transfer(struct simdisk *dev, unsigned long sector,
unsigned long io;
simc_lseek(dev->fd, offset, SEEK_SET);
READ_ONCE(*buffer);
if (write)
io = simc_write(dev->fd, buffer, nbytes);
else
...
...@@ -64,26 +64,7 @@ void platform_restart(void)
{
/* Flush and reset the mmu, simulate a processor reset, and
 * jump to the reset vector. */
cpu_reset();
__asm__ __volatile__ ("movi a2, 15\n\t"
"wsr a2, icountlevel\n\t"
"movi a2, 0\n\t"
"wsr a2, icount\n\t"
#if XCHAL_NUM_IBREAK > 0
"wsr a2, ibreakenable\n\t"
#endif
#if XCHAL_HAVE_LOOPS
"wsr a2, lcount\n\t"
#endif
"movi a2, 0x1f\n\t"
"wsr a2, ps\n\t"
"isync\n\t"
"jx %0\n\t"
:
: "a" (XCHAL_RESET_VECTOR_VADDR)
: "a2"
);
/* control never gets here */
}
...
...@@ -26,6 +26,8 @@
#include <linux/console.h>
#include <linux/delay.h>
#include <linux/of.h>
#include <linux/clk-provider.h>
#include <linux/of_address.h>
#include <asm/timex.h>
#include <asm/processor.h>
...@@ -54,58 +56,63 @@ void platform_restart(void)
{
/* Flush and reset the mmu, simulate a processor reset, and
 * jump to the reset vector. */
__asm__ __volatile__ ("movi a2, 15\n\t"
"wsr a2, icountlevel\n\t"
"movi a2, 0\n\t"
"wsr a2, icount\n\t"
#if XCHAL_NUM_IBREAK > 0
"wsr a2, ibreakenable\n\t"
#endif
#if XCHAL_HAVE_LOOPS
"wsr a2, lcount\n\t"
#endif
"movi a2, 0x1f\n\t"
"wsr a2, ps\n\t"
"isync\n\t"
"jx %0\n\t"
:
: "a" (XCHAL_RESET_VECTOR_VADDR)
: "a2"
);
/* control never gets here */
}
void __init platform_setup(char **cmdline)
{
}
#ifdef CONFIG_OF
static void __init update_clock_frequency(struct device_node *node)
{
struct property *newfreq;
u32 freq;
if (!of_property_read_u32(node, "clock-frequency", &freq) && freq != 0)
return;
newfreq = kzalloc(sizeof(*newfreq) + sizeof(u32), GFP_KERNEL);
if (!newfreq)
return;
newfreq->value = newfreq + 1;
newfreq->length = sizeof(freq);
newfreq->name = kstrdup("clock-frequency", GFP_KERNEL);
if (!newfreq->name) {
kfree(newfreq);
return;
}
*(u32 *)newfreq->value = cpu_to_be32(*(u32 *)XTFPGA_CLKFRQ_VADDR);
of_update_property(node, newfreq);
}
void platform_restart(void)
{
/* Flush and reset the mmu, simulate a processor reset, and
 * jump to the reset vector. */
cpu_reset();
/* control never gets here */
}
void __init platform_setup(char **cmdline)
{
}
/* early initialization */
void __init platform_init(bp_tag_t *first)
{
}
/* Heartbeat. */
void platform_heartbeat(void)
{
}
#ifdef CONFIG_XTENSA_CALIBRATE_CCOUNT
void __init platform_calibrate_ccount(void)
{
ccount_freq = *(long *)XTFPGA_CLKFRQ_VADDR;
}
#endif
#ifdef CONFIG_OF
static void __init xtfpga_clk_setup(struct device_node *np)
{
void __iomem *base = of_iomap(np, 0);
struct clk *clk;
u32 freq;
if (!base) {
pr_err("%s: invalid address\n", np->name);
return;
}
freq = __raw_readl(base);
iounmap(base);
clk = clk_register_fixed_rate(NULL, np->name, NULL, 0, freq);
if (IS_ERR(clk)) {
pr_err("%s: clk registration failed\n", np->name);
return;
}
if (of_clk_add_provider(np, of_clk_src_simple_get, clk)) {
pr_err("%s: clk provider registration failed\n", np->name);
return;
}
}
CLK_OF_DECLARE(xtfpga_clk, "cdns,xtfpga-clock", xtfpga_clk_setup);
#define MAC_LEN 6
static void __init update_local_mac(struct device_node *node)
...@@ -137,56 +144,15 @@ static void __init update_local_mac(struct device_node *node)
static int __init machine_setup(void)
{
struct device_node *clock;
struct device_node *eth = NULL;
for_each_node_by_name(clock, "main-oscillator")
update_clock_frequency(clock);
if ((eth = of_find_compatible_node(eth, NULL, "opencores,ethoc")))
update_local_mac(eth);
return 0;
}
arch_initcall(machine_setup);
#endif
#else
/* early initialization */
void __init platform_init(bp_tag_t *first)
{
}
/* Heartbeat. */
void platform_heartbeat(void)
{
}
#ifdef CONFIG_XTENSA_CALIBRATE_CCOUNT
void __init platform_calibrate_ccount(void)
{
long clk_freq = 0;
#ifdef CONFIG_OF
struct device_node *cpu =
of_find_compatible_node(NULL, NULL, "cdns,xtensa-cpu");
if (cpu) {
u32 freq;
update_clock_frequency(cpu);
if (!of_property_read_u32(cpu, "clock-frequency", &freq))
clk_freq = freq;
}
#endif
if (!clk_freq)
clk_freq = *(long *)XTFPGA_CLKFRQ_VADDR;
ccount_freq = clk_freq;
}
#endif
#ifndef CONFIG_OF
#include <linux/serial_8250.h>
#include <linux/if.h>
...
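Taken together, the XTFPGA changes above stop patching the clock-frequency property into the device tree (update_clock_frequency() and the main-oscillator scan in machine_setup() go away); instead, a fixed-rate clock provider is registered for the "cdns,xtfpga-clock" node at CLK_OF_DECLARE time, with the rate read from the FPGA frequency register, so the generic calibrate_ccount() in time.c can pick it up through the CPU node's clock reference.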
/*
* tie-asm.h -- compile-time HAL assembler definitions dependent on CORE & TIE
*
* NOTE: This header file is not meant to be included directly.
*/
/* This header file contains assembly-language definitions (assembly
macros, etc.) for this specific Xtensa processor's TIE extensions
and options. It is customized to this Xtensa processor configuration.
Copyright (c) 1999-2015 Cadence Design Systems Inc.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#ifndef _XTENSA_CORE_TIE_ASM_H
#define _XTENSA_CORE_TIE_ASM_H
/* Selection parameter values for save-area save/restore macros: */
/* Option vs. TIE: */
#define XTHAL_SAS_TIE 0x0001 /* custom extension or coprocessor */
#define XTHAL_SAS_OPT 0x0002 /* optional (and not a coprocessor) */
#define XTHAL_SAS_ANYOT 0x0003 /* both of the above */
/* Whether used automatically by compiler: */
#define XTHAL_SAS_NOCC 0x0004 /* not used by compiler w/o special opts/code */
#define XTHAL_SAS_CC 0x0008 /* used by compiler without special opts/code */
#define XTHAL_SAS_ANYCC 0x000C /* both of the above */
/* ABI handling across function calls: */
#define XTHAL_SAS_CALR 0x0010 /* caller-saved */
#define XTHAL_SAS_CALE 0x0020 /* callee-saved */
#define XTHAL_SAS_GLOB 0x0040 /* global across function calls (in thread) */
#define XTHAL_SAS_ANYABI 0x0070 /* all of the above three */
/* Misc */
#define XTHAL_SAS_ALL 0xFFFF /* include all default NCP contents */
#define XTHAL_SAS3(optie,ccuse,abi) ( ((optie) & XTHAL_SAS_ANYOT) \
| ((ccuse) & XTHAL_SAS_ANYCC) \
| ((abi) & XTHAL_SAS_ANYABI) )
/*
* Macro to store all non-coprocessor (extra) custom TIE and optional state
* (not including zero-overhead loop registers).
* Required parameters:
* ptr Save area pointer address register (clobbered)
* (register must contain a 4 byte aligned address).
* at1..at4 Four temporary address registers (first XCHAL_NCP_NUM_ATMPS
* registers are clobbered, the remaining are unused).
* Optional parameters:
* continue If macro invoked as part of a larger store sequence, set to 1
* if this is not the first in the sequence. Defaults to 0.
* ofs Offset from start of larger sequence (from value of first ptr
* in sequence) at which to store. Defaults to next available space
* (or 0 if <continue> is 0).
* select Select what category(ies) of registers to store, as a bitmask
* (see XTHAL_SAS_xxx constants). Defaults to all registers.
* alloc Select what category(ies) of registers to allocate; if any
* category is selected here that is not in <select>, space for
* the corresponding registers is skipped without doing any store.
*/
.macro xchal_ncp_store ptr at1 at2 at3 at4 continue=0 ofs=-1 select=XTHAL_SAS_ALL alloc=0
xchal_sa_start \continue, \ofs
// Optional global registers used by default by the compiler:
.ifeq (XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_GLOB) & ~(\select)
xchal_sa_align \ptr, 0, 1020, 4, 4
rur.THREADPTR \at1 // threadptr option
s32i \at1, \ptr, .Lxchal_ofs_+0
.set .Lxchal_ofs_, .Lxchal_ofs_ + 4
.elseif ((XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_GLOB) & ~(\alloc)) == 0
xchal_sa_align \ptr, 0, 1020, 4, 4
.set .Lxchal_ofs_, .Lxchal_ofs_ + 4
.endif
// Optional caller-saved registers used by default by the compiler:
.ifeq (XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_CALR) & ~(\select)
xchal_sa_align \ptr, 0, 1016, 4, 4
rsr.ACCLO \at1 // MAC16 option
s32i \at1, \ptr, .Lxchal_ofs_+0
rsr.ACCHI \at1 // MAC16 option
s32i \at1, \ptr, .Lxchal_ofs_+4
.set .Lxchal_ofs_, .Lxchal_ofs_ + 8
.elseif ((XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_CALR) & ~(\alloc)) == 0
xchal_sa_align \ptr, 0, 1016, 4, 4
.set .Lxchal_ofs_, .Lxchal_ofs_ + 8
.endif
// Optional caller-saved registers not used by default by the compiler:
.ifeq (XTHAL_SAS_OPT | XTHAL_SAS_NOCC | XTHAL_SAS_CALR) & ~(\select)
xchal_sa_align \ptr, 0, 1000, 4, 4
rsr.BR \at1 // boolean option
s32i \at1, \ptr, .Lxchal_ofs_+0
rsr.SCOMPARE1 \at1 // conditional store option
s32i \at1, \ptr, .Lxchal_ofs_+4
rsr.M0 \at1 // MAC16 option
s32i \at1, \ptr, .Lxchal_ofs_+8
rsr.M1 \at1 // MAC16 option
s32i \at1, \ptr, .Lxchal_ofs_+12
rsr.M2 \at1 // MAC16 option
s32i \at1, \ptr, .Lxchal_ofs_+16
rsr.M3 \at1 // MAC16 option
s32i \at1, \ptr, .Lxchal_ofs_+20
.set .Lxchal_ofs_, .Lxchal_ofs_ + 24
.elseif ((XTHAL_SAS_OPT | XTHAL_SAS_NOCC | XTHAL_SAS_CALR) & ~(\alloc)) == 0
xchal_sa_align \ptr, 0, 1000, 4, 4
.set .Lxchal_ofs_, .Lxchal_ofs_ + 24
.endif
.endm // xchal_ncp_store
/*
* Macro to load all non-coprocessor (extra) custom TIE and optional state
* (not including zero-overhead loop registers).
* Required parameters:
* ptr Save area pointer address register (clobbered)
* (register must contain a 4 byte aligned address).
* at1..at4 Four temporary address registers (first XCHAL_NCP_NUM_ATMPS
* registers are clobbered, the remaining are unused).
* Optional parameters:
* continue If macro invoked as part of a larger load sequence, set to 1
* if this is not the first in the sequence. Defaults to 0.
* ofs Offset from start of larger sequence (from value of first ptr
* in sequence) at which to load. Defaults to next available space
* (or 0 if <continue> is 0).
* select Select what category(ies) of registers to load, as a bitmask
* (see XTHAL_SAS_xxx constants). Defaults to all registers.
* alloc Select what category(ies) of registers to allocate; if any
* category is selected here that is not in <select>, space for
* the corresponding registers is skipped without doing any load.
*/
.macro xchal_ncp_load ptr at1 at2 at3 at4 continue=0 ofs=-1 select=XTHAL_SAS_ALL alloc=0
xchal_sa_start \continue, \ofs
// Optional global registers used by default by the compiler:
.ifeq (XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_GLOB) & ~(\select)
xchal_sa_align \ptr, 0, 1020, 4, 4
l32i \at1, \ptr, .Lxchal_ofs_+0
wur.THREADPTR \at1 // threadptr option
.set .Lxchal_ofs_, .Lxchal_ofs_ + 4
.elseif ((XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_GLOB) & ~(\alloc)) == 0
xchal_sa_align \ptr, 0, 1020, 4, 4
.set .Lxchal_ofs_, .Lxchal_ofs_ + 4
.endif
// Optional caller-saved registers used by default by the compiler:
.ifeq (XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_CALR) & ~(\select)
xchal_sa_align \ptr, 0, 1016, 4, 4
l32i \at1, \ptr, .Lxchal_ofs_+0
wsr.ACCLO \at1 // MAC16 option
l32i \at1, \ptr, .Lxchal_ofs_+4
wsr.ACCHI \at1 // MAC16 option
.set .Lxchal_ofs_, .Lxchal_ofs_ + 8
.elseif ((XTHAL_SAS_OPT | XTHAL_SAS_CC | XTHAL_SAS_CALR) & ~(\alloc)) == 0
xchal_sa_align \ptr, 0, 1016, 4, 4
.set .Lxchal_ofs_, .Lxchal_ofs_ + 8
.endif
// Optional caller-saved registers not used by default by the compiler:
.ifeq (XTHAL_SAS_OPT | XTHAL_SAS_NOCC | XTHAL_SAS_CALR) & ~(\select)
xchal_sa_align \ptr, 0, 1000, 4, 4
l32i \at1, \ptr, .Lxchal_ofs_+0
wsr.BR \at1 // boolean option
l32i \at1, \ptr, .Lxchal_ofs_+4
wsr.SCOMPARE1 \at1 // conditional store option
l32i \at1, \ptr, .Lxchal_ofs_+8
wsr.M0 \at1 // MAC16 option
l32i \at1, \ptr, .Lxchal_ofs_+12
wsr.M1 \at1 // MAC16 option
l32i \at1, \ptr, .Lxchal_ofs_+16
wsr.M2 \at1 // MAC16 option
l32i \at1, \ptr, .Lxchal_ofs_+20
wsr.M3 \at1 // MAC16 option
.set .Lxchal_ofs_, .Lxchal_ofs_ + 24
.elseif ((XTHAL_SAS_OPT | XTHAL_SAS_NOCC | XTHAL_SAS_CALR) & ~(\alloc)) == 0
xchal_sa_align \ptr, 0, 1000, 4, 4
.set .Lxchal_ofs_, .Lxchal_ofs_ + 24
.endif
.endm // xchal_ncp_load
#define XCHAL_NCP_NUM_ATMPS 1
#define XCHAL_SA_NUM_ATMPS 1
#endif /*_XTENSA_CORE_TIE_ASM_H*/
/*
* tie.h -- compile-time HAL definitions dependent on CORE & TIE configuration
*
* NOTE: This header file is not meant to be included directly.
*/
/* This header file describes this specific Xtensa processor's TIE extensions
that extend basic Xtensa core functionality. It is customized to this
Xtensa processor configuration.
Copyright (c) 1999-2015 Cadence Design Systems Inc.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#ifndef _XTENSA_CORE_TIE_H
#define _XTENSA_CORE_TIE_H
#define XCHAL_CP_NUM 1 /* number of coprocessors */
#define XCHAL_CP_MAX 8 /* max CP ID + 1 (0 if none) */
#define XCHAL_CP_MASK 0x80 /* bitmask of all CPs by ID */
#define XCHAL_CP_PORT_MASK 0x80 /* bitmask of only port CPs */
/* Basic parameters of each coprocessor: */
#define XCHAL_CP7_NAME "XTIOP"
#define XCHAL_CP7_IDENT XTIOP
#define XCHAL_CP7_SA_SIZE 0 /* size of state save area */
#define XCHAL_CP7_SA_ALIGN 1 /* min alignment of save area */
#define XCHAL_CP_ID_XTIOP 7 /* coprocessor ID (0..7) */
/* Filler info for unassigned coprocessors, to simplify arrays etc: */
#define XCHAL_CP0_SA_SIZE 0
#define XCHAL_CP0_SA_ALIGN 1
#define XCHAL_CP1_SA_SIZE 0
#define XCHAL_CP1_SA_ALIGN 1
#define XCHAL_CP2_SA_SIZE 0
#define XCHAL_CP2_SA_ALIGN 1
#define XCHAL_CP3_SA_SIZE 0
#define XCHAL_CP3_SA_ALIGN 1
#define XCHAL_CP4_SA_SIZE 0
#define XCHAL_CP4_SA_ALIGN 1
#define XCHAL_CP5_SA_SIZE 0
#define XCHAL_CP5_SA_ALIGN 1
#define XCHAL_CP6_SA_SIZE 0
#define XCHAL_CP6_SA_ALIGN 1
/* Save area for non-coprocessor optional and custom (TIE) state: */
#define XCHAL_NCP_SA_SIZE 36
#define XCHAL_NCP_SA_ALIGN 4
/* Total save area for optional and custom state (NCP + CPn): */
#define XCHAL_TOTAL_SA_SIZE 48 /* with 16-byte align padding */
#define XCHAL_TOTAL_SA_ALIGN 4 /* actual minimum alignment */
/*
* Detailed contents of save areas.
* NOTE: caller must define the XCHAL_SA_REG macro (not defined here)
* before expanding the XCHAL_xxx_SA_LIST() macros.
*
* XCHAL_SA_REG(s,ccused,abikind,kind,opt,name,galign,align,asize,
* dbnum,base,regnum,bitsz,gapsz,reset,x...)
*
* s = passed from XCHAL_*_LIST(s), eg. to select how to expand
* ccused = set if used by compiler without special options or code
* abikind = 0 (caller-saved), 1 (callee-saved), or 2 (thread-global)
* kind = 0 (special reg), 1 (TIE user reg), or 2 (TIE regfile reg)
* opt = 0 (custom TIE extension or coprocessor), or 1 (optional reg)
* name = lowercase reg name (no quotes)
* galign = group byte alignment (power of 2) (galign >= align)
* align = register byte alignment (power of 2)
* asize = allocated size in bytes (asize*8 == bitsz + gapsz + padsz)
* (not including any pad bytes required to galign this or next reg)
* dbnum = unique target number f/debug (see <xtensa-libdb-macros.h>)
* base = reg shortname w/o index (or sr=special, ur=TIE user reg)
* regnum = reg index in regfile, or special/TIE-user reg number
* bitsz = number of significant bits (regfile width, or ur/sr mask bits)
* gapsz = intervening bits, if bitsz bits not stored contiguously
* (padsz = pad bits at end [TIE regfile] or at msbits [ur,sr] of asize)
* reset = register reset value (or 0 if undefined at reset)
* x = reserved for future use (0 until then)
*
* To filter out certain registers, e.g. to expand only the non-global
* registers used by the compiler, you can do something like this:
*
* #define XCHAL_SA_REG(s,ccused,p...) SELCC##ccused(p)
* #define SELCC0(p...)
* #define SELCC1(abikind,p...) SELAK##abikind(p)
* #define SELAK0(p...) REG(p)
* #define SELAK1(p...) REG(p)
* #define SELAK2(p...)
* #define REG(kind,tie,name,galn,aln,asz,csz,dbnum,base,rnum,bsz,rst,x...) \
* ...what you want to expand...
*/
#define XCHAL_NCP_SA_NUM 9
#define XCHAL_NCP_SA_LIST(s) \
XCHAL_SA_REG(s,1,2,1,1, threadptr, 4, 4, 4,0x03E7, ur,231, 32,0,0,0) \
XCHAL_SA_REG(s,1,0,0,1, acclo, 4, 4, 4,0x0210, sr,16 , 32,0,0,0) \
XCHAL_SA_REG(s,1,0,0,1, acchi, 4, 4, 4,0x0211, sr,17 , 8,0,0,0) \
XCHAL_SA_REG(s,0,0,0,1, br, 4, 4, 4,0x0204, sr,4 , 16,0,0,0) \
XCHAL_SA_REG(s,0,0,0,1, scompare1, 4, 4, 4,0x020C, sr,12 , 32,0,0,0) \
XCHAL_SA_REG(s,0,0,0,1, m0, 4, 4, 4,0x0220, sr,32 , 32,0,0,0) \
XCHAL_SA_REG(s,0,0,0,1, m1, 4, 4, 4,0x0221, sr,33 , 32,0,0,0) \
XCHAL_SA_REG(s,0,0,0,1, m2, 4, 4, 4,0x0222, sr,34 , 32,0,0,0) \
XCHAL_SA_REG(s,0,0,0,1, m3, 4, 4, 4,0x0223, sr,35 , 32,0,0,0)
#define XCHAL_CP0_SA_NUM 0
#define XCHAL_CP0_SA_LIST(s) /* empty */
#define XCHAL_CP1_SA_NUM 0
#define XCHAL_CP1_SA_LIST(s) /* empty */
#define XCHAL_CP2_SA_NUM 0
#define XCHAL_CP2_SA_LIST(s) /* empty */
#define XCHAL_CP3_SA_NUM 0
#define XCHAL_CP3_SA_LIST(s) /* empty */
#define XCHAL_CP4_SA_NUM 0
#define XCHAL_CP4_SA_LIST(s) /* empty */
#define XCHAL_CP5_SA_NUM 0
#define XCHAL_CP5_SA_LIST(s) /* empty */
#define XCHAL_CP6_SA_NUM 0
#define XCHAL_CP6_SA_LIST(s) /* empty */
#define XCHAL_CP7_SA_NUM 0
#define XCHAL_CP7_SA_LIST(s) /* empty */
/* Byte length of instruction from its first nibble (op0 field), per FLIX. */
#define XCHAL_OP0_FORMAT_LENGTHS 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3
/* Byte length of instruction from its first byte, per FLIX. */
#define XCHAL_BYTE0_FORMAT_LENGTHS \
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3,\
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3,\
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3,\
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3,\
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3,\
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3,\
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3,\
3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3, 3,3,3,3,3,3,3,3,2,2,2,2,2,2,3,3
#endif /*_XTENSA_CORE_TIE_H*/
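The comment block in this header sketches how XCHAL_NCP_SA_LIST() is meant to be consumed: define XCHAL_SA_REG before expanding the list, selecting whichever fields are of interest. As a rough, hypothetical illustration of that technique (not part of this commit; it assumes the header above is included), the list can be expanded into a table of save-area register names and sizes:

/* Hypothetical illustration: expand XCHAL_NCP_SA_LIST() into a table.
 * Parameter order follows the XCHAL_SA_REG(s,ccused,abikind,kind,opt,
 * name,galign,align,asize,...) description documented above. */
#define XCHAL_SA_REG(s, ccused, abikind, kind, opt, name, galign, align, \
		     asize, dbnum, base, regnum, bitsz, gapsz, reset, x) \
	{ #name, asize },

static const struct { const char *name; int size; } ncp_sa_regs[] = {
	XCHAL_NCP_SA_LIST(0)	/* expands to XCHAL_NCP_SA_NUM (9) entries for this core */
};

#undef XCHAL_SA_REG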