Commit 8fd5e7a2 authored by Linus Torvalds

Merge tag 'metag-v3.9-rc1-v4' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag

Pull new ImgTec Meta architecture from James Hogan:
 "This adds core architecture support for Imagination's Meta processor
  cores, followed by some later miscellaneous arch/metag cleanups and
  fixes which I kept separate to ease review:

   - Support for basic Meta 1 (ATP) and Meta 2 (HTP) core architecture
   - A few fixes all over, particularly for symbol prefixes
   - A few privilege protection fixes
   - Several cleanups (setup.c includes, split out a lot of
     metag_ksyms.c)
   - Fix some missing exports
   - Convert hugetlb to use vm_unmapped_area()
   - Copy device tree to non-init memory
   - Provide dma_get_sgtable()"

* tag 'metag-v3.9-rc1-v4' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag: (61 commits)
  metag: Provide dma_get_sgtable()
  metag: prom.h: remove declaration of metag_dt_memblock_reserve()
  metag: copy devicetree to non-init memory
  metag: cleanup metag_ksyms.c includes
  metag: move mm/init.c exports out of metag_ksyms.c
  metag: move usercopy.c exports out of metag_ksyms.c
  metag: move setup.c exports out of metag_ksyms.c
  metag: move kick.c exports out of metag_ksyms.c
  metag: move traps.c exports out of metag_ksyms.c
  metag: move irq enable out of irqflags.h on SMP
  genksyms: fix metag symbol prefix on crc symbols
  metag: hugetlb: convert to vm_unmapped_area()
  metag: export clear_page and copy_page
  metag: export metag_code_cache_flush_all
  metag: protect more non-MMU memory regions
  metag: make TXPRIVEXT bits explicit
  metag: kernel/setup.c: sort includes
  perf: Enable building perf tools for Meta
  metag: add boot time LNKGET/LNKSET check
  metag: add __init to metag_cache_probe()
  ...
parents 529e5fbc c60ac315
@@ -299,6 +299,8 @@ memory-hotplug.txt
- Hotpluggable memory support, how to use and current status.
memory.txt
- info on typical Linux memory problems.
metag/
- directory with info about Linux on Meta architecture.
mips/
- directory with info about Linux on MIPS architecture.
misc-devices/
* Meta External Trigger Controller Binding
This binding specifies what properties must be available in the device tree
representation of a Meta external trigger controller.
Required properties:
- compatible: Specifies the compatibility list for the interrupt controller.
The type shall be <string> and the value shall include "img,meta-intc".
- num-banks: Specifies the number of interrupt banks (each of which can
handle 32 interrupt sources).
- interrupt-controller: The presence of this property identifies the node
as an interrupt controller. No property value shall be defined.
- #interrupt-cells: Specifies the number of cells needed to encode an
interrupt source. The type shall be a <u32> and the value shall be 2.
- #address-cells: Specifies the number of cells needed to encode an
address. The type shall be <u32> and the value shall be 0. As such,
'interrupt-map' nodes do not have to specify a parent unit address.
Optional properties:
- no-mask: The controller doesn't have any mask registers.
* Interrupt Specifier Definition
Interrupt specifiers consists of 2 cells encoded as follows:
- <1st-cell>: The interrupt-number that identifies the interrupt source.
- <2nd-cell>: The Linux interrupt flags containing level-sense information,
encoded as follows:
1 = edge triggered
4 = level-sensitive
* Examples
Example 1:
/*
* Meta external trigger block
*/
intc: intc {
// This is an interrupt controller node.
interrupt-controller;
// No address cells so that 'interrupt-map' nodes which
// reference this interrupt controller node do not need a parent
// address specifier.
#address-cells = <0>;
// Two cells to encode interrupt sources.
#interrupt-cells = <2>;
// Number of interrupt banks
num-banks = <2>;
// No HWMASKEXT is available (specify on Chorus2 and Comet ES1)
no-mask;
// Compatible with Meta hardware trigger block.
compatible = "img,meta-intc";
};
Example 2:
/*
* An interrupt generating device that is wired to a Meta external
* trigger block.
*/
uart1: uart@0x02004c00 {
// Interrupt source '5' that is level-sensitive.
// Note that there are only two cells as specified in the
// interrupt parent's '#interrupt-cells' property.
interrupts = <5 4 /* level */>;
// The interrupt controller that this device is wired to.
interrupt-parent = <&intc>;
};
@@ -978,6 +978,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
If specified, z/VM IUCV HVC accepts connections
from listed z/VM user IDs only.
hwthread_map= [METAG] Comma-separated list of Linux cpu id to
hardware thread id mappings.
Format: <cpu>:<hwthread>
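For example (illustrative values only, not from this commit),
hwthread_map=0:1,1:2 would map Linux CPU 0 to hardware thread 1
and Linux CPU 1 to hardware thread 2.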
keep_bootcon [KNL]
Do not unregister boot console at start. This is only
useful for debugging when something happens in the window
00-INDEX
- this file
kernel-ABI.txt
- Documents metag ABI details
==========================
KERNEL ABIS FOR METAG ARCH
==========================
This document describes the Linux ABIs for the metag architecture, and has the
following sections:
(*) Outline of registers
(*) Userland registers
(*) Kernel registers
(*) System call ABI
(*) Calling conventions
====================
OUTLINE OF REGISTERS
====================
The main Meta core registers are arranged in units:
UNIT Type DESCRIPTION GP EXT PRIV GLOBAL
======= ======= =============== ======= ======= ======= =======
CT Special Control unit
D0 General Data unit 0 0-7 8-15 16-31 16-31
D1 General Data unit 1 0-7 8-15 16-31 16-31
A0 General Address unit 0 0-3 4-7 8-15 8-15
A1 General Address unit 1 0-3 4-7 8-15 8-15
PC Special PC unit 0 1
PORT Special Ports
TR Special Trigger unit 0-7
TT Special Trace unit 0-5
FX General FP unit 0-15
GP registers form part of the main context.
Extended context registers (EXT) may not be present on all hardware threads and
can be context switched if support is enabled and the appropriate bits are set
in e.g. the D0.8 register to indicate what extended state to preserve.
Global registers are shared between threads and are privilege protected.
See arch/metag/include/asm/metag_regs.h for definitions relating to core
registers and the fields and bits they contain. See the TRMs for further details
about special registers.
Several special registers are preserved in the main context; these are the
interesting ones:
REG (ALIAS) PURPOSE
======================= ===============================================
CT.1 (TXMODE) Processor mode bits (particularly for DSP)
CT.2 (TXSTATUS) Condition flags and LSM_STEP (MGET/MSET step)
CT.3 (TXRPT) Branch repeat counter
PC.0 (PC) Program counter
Some of the general registers have special purposes in the ABI and therefore
have aliases:
D0 REG (ALIAS) PURPOSE D1 REG (ALIAS) PURPOSE
=============== =============== =============== =======================
D0.0 (D0Re0) 32bit result D1.0 (D1Re0) Top half of 64bit result
D0.1 (D0Ar6) Argument 6 D1.1 (D1Ar5) Argument 5
D0.2 (D0Ar4) Argument 4 D1.2 (D1Ar3) Argument 3
D0.3 (D0Ar2) Argument 2 D1.3 (D1Ar1) Argument 1
D0.4 (D0FrT) Frame temp D1.4 (D1RtP) Return pointer
D0.5 Call preserved D1.5 Call preserved
D0.6 Call preserved D1.6 Call preserved
D0.7 Call preserved D1.7 Call preserved
A0 REG (ALIAS) PURPOSE A1 REG (ALIAS) PURPOSE
=============== =============== =============== =======================
A0.0 (A0StP) Stack pointer A1.0 (A1GbP) Global base pointer
A0.1 (A0FrP) Frame pointer A1.1 (A1LbP) Local base pointer
A0.2 A1.2
A0.3 A1.3
==================
USERLAND REGISTERS
==================
All the general purpose D0, D1, A0, A1 registers are preserved when entering the
kernel (including asynchronous events such as interrupts and timer ticks) except
the following which have special purposes in the ABI:
REGISTERS WHEN STATUS PURPOSE
=============== ======= =============== ===============================
D0.8 DSP Preserved ECH, determines what extended
DSP state to preserve.
A0.0 (A0StP) ALWAYS Preserved Stack >= A0StP may be clobbered
at any time by the creation of a
signal frame.
A1.0 (A1GbP) SMP Clobbered Used as temporary for loading
kernel stack pointer and saving
core context.
A0.15 !SMP Protected Stores kernel stack pointer.
A1.15 ALWAYS Protected Stores kernel base pointer.
On UP, A0.15 is used to store the kernel stack pointer for storing the userland
context. A0.15 is, however, global between hardware threads, which means it cannot
be used on SMP for this purpose. Since no protected local registers are
available A1GbP is reserved for use as a temporary to allow a percpu stack
pointer to be loaded for storing the rest of the context.
================
KERNEL REGISTERS
================
When in the kernel the following registers have special purposes in the ABI:
REGISTERS WHEN STATUS PURPOSE
=============== ======= =============== ===============================
A0.0 (A0StP) ALWAYS Preserved Stack >= A0StP may be clobbered
at any time by the creation of
an irq signal frame.
A1.0 (A1GbP) ALWAYS Preserved Reserved (kernel base pointer).
===============
SYSTEM CALL ABI
===============
When a system call is made, the following registers are effective:
REGISTERS CALL RETURN
=============== ======================= ===============================
D0.0 (D0Re0) Return value (or -errno)
D1.0 (D1Re0) System call number Clobbered
D0.1 (D0Ar6) Syscall arg #6 Preserved
D1.1 (D1Ar5) Syscall arg #5 Preserved
D0.2 (D0Ar4) Syscall arg #4 Preserved
D1.2 (D1Ar3) Syscall arg #3 Preserved
D0.3 (D0Ar2) Syscall arg #2 Preserved
D1.3 (D1Ar1) Syscall arg #1 Preserved
Due to the limited number of argument registers and some system calls with badly
aligned 64-bit arguments, 64-bit values are always packed in consecutive
arguments, even if this is contrary to the normal calling conventions (where the
two halves would go in a matching pair of data registers).
For example fadvise64_64 usually has the signature:
long sys_fadvise64_64(i32 fd, i64 offs, i64 len, i32 advice);
But for metag fadvise64_64 is wrapped so that the 64-bit arguments are packed:
long sys_fadvise64_64_metag(i32 fd, i32 offs_lo,
i32 offs_hi, i32 len_lo,
i32 len_hi, i32 advice)
So the arguments are packed in the registers like this:
D0 REG (ALIAS) VALUE D1 REG (ALIAS) VALUE
=============== =============== =============== =======================
D0.1 (D0Ar6) advice D1.1 (D1Ar5) hi(len)
D0.2 (D0Ar4) lo(len) D1.2 (D1Ar3) hi(offs)
D0.3 (D0Ar2) lo(offs) D1.3 (D1Ar1) fd
===================
CALLING CONVENTIONS
===================
These calling conventions apply to both user and kernel code. The stack grows
from low addresses to high addresses in the metag ABI. The stack pointer (A0StP)
should always point to the next free address on the stack and should at all
times be 64-bit aligned. The following registers are effective at the point of a
call:
REGISTERS CALL RETURN
=============== ======================= ===============================
D0.0 (D0Re0) 32bit return value
D1.0 (D1Re0) Upper half of 64bit return value
D0.1 (D0Ar6) 32bit argument #6 Clobbered
D1.1 (D1Ar5) 32bit argument #5 Clobbered
D0.2 (D0Ar4) 32bit argument #4 Clobbered
D1.2 (D1Ar3) 32bit argument #3 Clobbered
D0.3 (D0Ar2) 32bit argument #2 Clobbered
D1.3 (D1Ar1) 32bit argument #1 Clobbered
D0.4 (D0FrT) Clobbered
D1.4 (D1RtP) Return pointer Clobbered
D{0-1}.{5-7} Preserved
A0.0 (A0StP) Stack pointer Preserved
A1.0 (A1GbP) Preserved
A0.1 (A0FrP) Frame pointer Preserved
A1.1 (A1LbP) Preserved
A{0-1}.{2-3} Clobbered
64-bit arguments are placed in matching pairs of registers (i.e. the same
register number in both D0 and D1 units), with the least significant half in D0
and the most significant half in D1, leaving a gap where necessary. Further
arguments are stored on the stack in reverse order (earlier arguments at higher
addresses):
ADDRESS 0 1 2 3 4 5 6 7
=============== ===== ===== ===== ===== ===== ===== ===== =====
A0StP -->
A0StP-0x08 32bit argument #8 32bit argument #7
A0StP-0x10 32bit argument #10 32bit argument #9
Function prologues tend to look a bit like this:
/* If frame pointer in use, move it to frame temp register so it can be
easily pushed onto stack */
MOV D0FrT,A0FrP
/* If frame pointer in use, set it to stack pointer */
ADD A0FrP,A0StP,#0
/* Preserve D0FrT, D1RtP, D{0-1}.{5-7} on stack, incrementing A0StP */
MSETL [A0StP++],D0FrT,D0.5,D0.6,D0.7
/* Allocate some stack space for local variables */
ADD A0StP,A0StP,#0x10
At this point the stack would look like this:
ADDRESS 0 1 2 3 4 5 6 7
=============== ===== ===== ===== ===== ===== ===== ===== =====
A0StP -->
A0StP-0x08
A0StP-0x10
A0StP-0x18 Old D0.7 Old D1.7
A0StP-0x20 Old D0.6 Old D1.6
A0StP-0x28 Old D0.5 Old D1.5
A0FrP --> Old A0FrP (frame ptr) Old D1RtP (return ptr)
A0FrP-0x08 32bit argument #8 32bit argument #7
A0FrP-0x10 32bit argument #10 32bit argument #9
Function epilogues tend to differ depending on the use of a frame pointer. An
example of a frame pointer epilogue:
/* Restore D0FrT, D1RtP, D{0-1}.{5-7} from stack, incrementing A0FrP */
MGETL D0FrT,D0.5,D0.6,D0.7,[A0FrP++]
/* Restore stack pointer to where frame pointer was before increment */
SUB A0StP,A0FrP,#0x20
/* Restore frame pointer from frame temp */
MOV A0FrP,D0FrT
/* Return to caller via restored return pointer */
MOV PC,D1RtP
If the function hasn't touched the frame pointer, MGETL cannot be safely used
with A0StP as it always increments and that would expose the stack to clobbering
by interrupts (kernel) or signals (user). Therefore it's common to see the MGETL
split into separate GETL instructions:
/* Restore D0FrT, D1RtP, D{0-1}.{5-7} from stack */
GETL D0FrT,D1RtP,[A0StP+#-0x30]
GETL D0.5,D1.5,[A0StP+#-0x28]
GETL D0.6,D1.6,[A0StP+#-0x20]
GETL D0.7,D1.7,[A0StP+#-0x18]
/* Restore stack pointer */
SUB A0StP,A0StP,#0x30
/* Return to caller via restored return pointer */
MOV PC,D1RtP
@@ -5204,6 +5204,18 @@ F: drivers/mtd/
F: include/linux/mtd/
F: include/uapi/mtd/
METAG ARCHITECTURE
M: James Hogan <james.hogan@imgtec.com>
S: Supported
F: arch/metag/
F: Documentation/metag/
F: Documentation/devicetree/bindings/metag/
F: drivers/clocksource/metag_generic.c
F: drivers/irqchip/irq-metag.c
F: drivers/irqchip/irq-metag-ext.c
F: drivers/tty/metag_da.c
F: fs/imgdafs/
MICROBLAZE ARCHITECTURE
M: Michal Simek <monstr@monstr.eu>
L: microblaze-uclinux@itee.uq.edu.au (moderated for non-subscribers)
@@ -103,6 +103,22 @@ config UPROBES
If in doubt, say "N".
config HAVE_64BIT_ALIGNED_ACCESS
def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
help
Some architectures require 64 bit accesses to be 64 bit
aligned, which also requires structs containing 64 bit values
to be 64 bit aligned too. This includes some 32 bit
architectures which can do 64 bit accesses, as well as 64 bit
architectures without unaligned access.
This symbol should be selected by an architecture if 64 bit
accesses are required to be 64 bit aligned in this way even
though it is not a 64 bit architecture.
See Documentation/unaligned-memory-access.txt for more
information on the topic of unaligned memory accesses.
config HAVE_EFFICIENT_UNALIGNED_ACCESS
bool
help
config SYMBOL_PREFIX
string
default "_"
config METAG
def_bool y
select EMBEDDED
select GENERIC_ATOMIC64
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_SHOW
select GENERIC_SMP_IDLE_THREAD
select HAVE_64BIT_ALIGNED_ACCESS
select HAVE_ARCH_TRACEHOOK
select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_GENERIC_HARDIRQS
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZO
select HAVE_KERNEL_XZ
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_PERF_EVENTS
select HAVE_SYSCALL_TRACEPOINTS
select IRQ_DOMAIN
select MODULES_USE_ELF_RELA
select OF
select OF_EARLY_FLATTREE
select SPARSE_IRQ
config STACKTRACE_SUPPORT
def_bool y
config LOCKDEP_SUPPORT
def_bool y
config HAVE_LATENCYTOP_SUPPORT
def_bool y
config RWSEM_GENERIC_SPINLOCK
def_bool y
config RWSEM_XCHGADD_ALGORITHM
bool
config GENERIC_HWEIGHT
def_bool y
config GENERIC_CALIBRATE_DELAY
def_bool y
config GENERIC_GPIO
def_bool n
config NO_IOPORT
def_bool y
source "init/Kconfig"
source "kernel/Kconfig.freezer"
menu "Processor type and features"
config MMU
def_bool y
config STACK_GROWSUP
def_bool y
config HOTPLUG_CPU
bool "Enable CPU hotplug support"
depends on SMP
help
Say Y here to allow turning CPUs off and on. CPUs can be
controlled through /sys/devices/system/cpu.
Say N if you want to disable CPU hotplug.
config HIGHMEM
bool "High Memory Support"
help
The address space of Meta processors is only 4 Gigabytes large
and it has to accommodate user address space, kernel address
space as well as some memory mapped IO. That means that, if you
have a large amount of physical memory and/or IO, not all of the
memory can be "permanently mapped" by the kernel. The physical
memory that is not permanently mapped is called "high memory".
Depending on the selected kernel/user memory split, minimum
vmalloc space and actual amount of RAM, you may not need this
option which should result in a slightly faster kernel.
If unsure, say n.
source "arch/metag/mm/Kconfig"
source "arch/metag/Kconfig.soc"
config METAG_META12
bool
help
Select this from the SoC config symbol to indicate that it contains a
Meta 1.2 core.
config METAG_META21
bool
help
Select this from the SoC config symbol to indicate that it contains a
Meta 2.1 core.
config SMP
bool "Symmetric multi-processing support"
depends on METAG_META21 && METAG_META21_MMU
select USE_GENERIC_SMP_HELPERS
help
This enables support for systems with more than one thread running
Linux. If you have a system with only one thread running Linux,
say N. Otherwise, say Y.
config NR_CPUS
int "Maximum number of CPUs (2-4)" if SMP
range 2 4 if SMP
default "1" if !SMP
default "4" if SMP
config METAG_SMP_WRITE_REORDERING
bool
help
This attempts to prevent cache-memory incoherence due to external
reordering of writes from different hardware threads when SMP is
enabled. It adds fences (system event 0) to smp_mb and smp_rmb in an
attempt to catch some of the cases, and also before writes to shared
memory in LOCK1 protected atomics and spinlocks.
This will not completely prevent cache incoherency on affected cores.
config METAG_LNKGET_AROUND_CACHE
bool
depends on METAG_META21
help
This indicates that the LNKGET/LNKSET instructions go around the
cache, which requires some extra cache flushes when the memory needs
to be accessed by normal GET/SET instructions too.
choice
prompt "Atomicity primitive"
default METAG_ATOMICITY_LNKGET
help
This option selects the mechanism for performing atomic operations.
config METAG_ATOMICITY_IRQSOFF
depends on !SMP
bool "irqsoff"
help
This option disables interrupts to achieve atomicity. This mechanism
is not SMP-safe.
config METAG_ATOMICITY_LNKGET
depends on METAG_META21
bool "lnkget/lnkset"
help
This option uses the LNKGET and LNKSET instructions to achieve
atomicity. LNKGET/LNKSET are load-link/store-conditional instructions.
Choose this option if your system requires low latency.
config METAG_ATOMICITY_LOCK1
depends on SMP
bool "lock1"
help
This option uses the LOCK1 instruction for atomicity. This is mainly
provided as a debugging aid if the lnkget/lnkset atomicity primitive
isn't working properly.
endchoice
config METAG_FPU
bool "FPU Support"
depends on METAG_META21
default y
help
This option allows processes to use FPU hardware available with this
CPU. If this option is not enabled FPU registers will not be saved
and restored on context-switch.
If you plan on running programs which are compiled to use hard floats
say Y here.
config METAG_DSP
bool "DSP Support"
help
This option allows processes to use DSP hardware available
with this CPU. If this option is not enabled DSP registers
will not be saved and restored on context-switch.
If you plan on running DSP programs say Y here.
config METAG_PERFCOUNTER_IRQS
bool "PerfCounters interrupt support"
depends on METAG_META21
help
This option enables using interrupts to collect information from
Performance Counters. This option is supported in new META21
(starting from HTP265).
When disabled, Performance Counters information will be collected
based on Timer Interrupt.
config METAG_DA
bool "DA support"
help
Say Y if you plan to use a DA debug adapter with Linux. The presence
of the DA will be detected automatically at boot, so it is safe to say
Y to this option even when booting without a DA.
This enables support for services provided by DA JTAG debug adapters,
such as:
- communication over DA channels (such as the console driver).
- use of the DA filesystem.
menu "Boot options"
config METAG_BUILTIN_DTB
bool "Embed DTB in kernel image"
default y
help
Embeds a device tree binary in the kernel image.
config METAG_BUILTIN_DTB_NAME
string "Built in DTB"
depends on METAG_BUILTIN_DTB
help
Set the name of the DTB to embed (leave blank to pick one
automatically based on kernel configuration).
config CMDLINE_BOOL
bool "Default bootloader kernel arguments"
config CMDLINE
string "Kernel command line"
depends on CMDLINE_BOOL
help
On some architectures there is currently no way for the boot loader
to pass arguments to the kernel. For these architectures, you should
supply some command-line options at build time by entering them
here.
config CMDLINE_FORCE
bool "Force default kernel command string"
depends on CMDLINE_BOOL
help
Set this to have arguments from the default kernel command string
override those passed by the boot loader.
endmenu
source "kernel/Kconfig.preempt"
source kernel/Kconfig.hz
endmenu
menu "Power management options"
source kernel/power/Kconfig
endmenu
menu "Executable file formats"
source "fs/Kconfig.binfmt"
endmenu
source "net/Kconfig"
source "drivers/Kconfig"
source "fs/Kconfig"
source "arch/metag/Kconfig.debug"
source "security/Kconfig"
source "crypto/Kconfig"
source "lib/Kconfig"
menu "Kernel hacking"
config TRACE_IRQFLAGS_SUPPORT
bool
default y
source "lib/Kconfig.debug"
config DEBUG_STACKOVERFLOW
bool "Check for stack overflows"
depends on DEBUG_KERNEL
help
This option will cause messages to be printed if free stack space
drops below a certain limit.
config 4KSTACKS
bool "Use 4Kb for kernel stacks instead of 8Kb"
depends on DEBUG_KERNEL
help
If you say Y here the kernel will use a 4Kb stacksize for the
kernel stack attached to each process/thread. This facilitates
running more threads on a system and also reduces the pressure
on the VM subsystem for higher order allocations. This option
will also use IRQ stacks to compensate for the reduced stackspace.
config METAG_FUNCTION_TRACE
bool "Output Meta real-time trace data for function entry/exit"
help
If you say Y here the kernel will use the Meta hardware trace
unit to output information about function entry and exit that
can be used by a debugger for profiling and call-graphs.
config METAG_POISON_CATCH_BUFFERS
bool "Poison catch buffer contents on kernel entry"
help
If you say Y here the kernel will write poison data to the
catch buffer registers on kernel entry. This will make any
problem with catch buffer handling much more apparent.
endmenu
choice
prompt "SoC Type"
default META21_FPGA
config META12_FPGA
bool "Meta 1.2 FPGA"
select METAG_META12
help
This is a Meta 1.2 FPGA bitstream, just a bare CPU.
config META21_FPGA
bool "Meta 2.1 FPGA"
select METAG_META21
help
This is a Meta 2.1 FPGA bitstream, just a bare CPU.
endchoice
menu "SoC configuration"
if METAG_META21
# Meta 2.x specific options
config METAG_META21_MMU
bool "Meta 2.x MMU mode"
default y
help
Use the Meta 2.x MMU in extended mode.
config METAG_UNALIGNED
bool "Meta 2.x unaligned access checking"
default y
help
All memory accesses will be checked for alignment and an exception
raised on unaligned accesses. This feature does cost performance
but without it there will be no notification of this type of error.
config METAG_USER_TCM
bool "Meta on-chip memory support for userland"
select GENERIC_ALLOCATOR
default y
help
Allow the on-chip memories of Meta SoCs to be used by user
applications.
endif
config METAG_HALT_ON_PANIC
bool "Halt the core on panic"
help
Halt the core when a panic occurs. This is useful when running
pre-production silicon or in an FPGA environment.
endmenu
#
# metag/Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies. Remember to have actions
# for "archclean" cleaning up for this architecture.
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1994 by Linus Torvalds
# 2007,2008,2012 by Imagination Technologies Ltd.
#
LDFLAGS :=
OBJCOPYFLAGS := -O binary -R .note -R .comment -S
checkflags-$(CONFIG_METAG_META12) += -DMETAC_1_2
checkflags-$(CONFIG_METAG_META21) += -DMETAC_2_1
CHECKFLAGS += -D__metag__ $(checkflags-y)
KBUILD_DEFCONFIG := meta2_defconfig
sflags-$(CONFIG_METAG_META12) += -mmetac=1.2
ifeq ($(CONFIG_METAG_META12),y)
# Only use TBI API 1.4 if DSP is enabled for META12 cores
sflags-$(CONFIG_METAG_DSP) += -DTBI_1_4
endif
sflags-$(CONFIG_METAG_META21) += -mmetac=2.1 -DTBI_1_4
cflags-$(CONFIG_METAG_FUNCTION_TRACE) += -mhwtrace-leaf -mhwtrace-retpc
cflags-$(CONFIG_METAG_META21) += -mextensions=bex
KBUILD_CFLAGS += -pipe
KBUILD_CFLAGS += -ffunction-sections
KBUILD_CFLAGS += $(sflags-y) $(cflags-y)
KBUILD_AFLAGS += $(sflags-y)
LDFLAGS_vmlinux := $(ldflags-y)
head-y := arch/metag/kernel/head.o
core-y += arch/metag/boot/dts/
core-y += arch/metag/kernel/
core-y += arch/metag/mm/
libs-y += arch/metag/lib/
libs-y += arch/metag/tbx/
boot := arch/metag/boot
boot_targets += uImage
boot_targets += uImage.gz
boot_targets += uImage.bz2
boot_targets += uImage.xz
boot_targets += uImage.lzo
boot_targets += uImage.bin
boot_targets += vmlinux.bin
PHONY += $(boot_targets)
all: vmlinux.bin
$(boot_targets): vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
%.dtb %.dtb.S %.dtb.o: scripts
$(Q)$(MAKE) $(build)=$(boot)/dts $(boot)/dts/$@
dtbs: scripts
$(Q)$(MAKE) $(build)=$(boot)/dts dtbs
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
define archhelp
echo '* vmlinux.bin - Binary kernel image (arch/$(ARCH)/boot/vmlinux.bin)'
@echo ' uImage - Alias to bootable U-Boot image'
@echo ' uImage.bin - Kernel-only image for U-Boot (bin)'
@echo ' uImage.gz - Kernel-only image for U-Boot (gzip)'
@echo ' uImage.bz2 - Kernel-only image for U-Boot (bzip2)'
@echo ' uImage.xz - Kernel-only image for U-Boot (xz)'
@echo ' uImage.lzo - Kernel-only image for U-Boot (lzo)'
@echo ' dtbs - Build device tree blobs for enabled boards'
endef
vmlinux*
uImage*
ramdisk.*
*.dtb
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 2007,2012 Imagination Technologies Ltd.
#
suffix-y := bin
suffix-$(CONFIG_KERNEL_GZIP) := gz
suffix-$(CONFIG_KERNEL_BZIP2) := bz2
suffix-$(CONFIG_KERNEL_XZ) := xz
suffix-$(CONFIG_KERNEL_LZO) := lzo
targets += vmlinux.bin
targets += uImage
targets += uImage.gz
targets += uImage.bz2
targets += uImage.xz
targets += uImage.lzo
targets += uImage.bin
extra-y += vmlinux.bin
extra-y += vmlinux.bin.gz
extra-y += vmlinux.bin.bz2
extra-y += vmlinux.bin.xz
extra-y += vmlinux.bin.lzo
UIMAGE_LOADADDR = $(CONFIG_PAGE_OFFSET)
ifeq ($(CONFIG_FUNCTION_TRACER),y)
orig_cflags := $(KBUILD_CFLAGS)
KBUILD_CFLAGS = $(subst -pg, , $(orig_cflags))
endif
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
$(call if_changed,bzip2)
$(obj)/vmlinux.bin.xz: $(obj)/vmlinux.bin FORCE
$(call if_changed,xzkern)
$(obj)/vmlinux.bin.lzo: $(obj)/vmlinux.bin FORCE
$(call if_changed,lzo)
$(obj)/uImage.gz: $(obj)/vmlinux.bin.gz FORCE
$(call if_changed,uimage,gzip)
$(obj)/uImage.bz2: $(obj)/vmlinux.bin.bz2 FORCE
$(call if_changed,uimage,bzip2)
$(obj)/uImage.xz: $(obj)/vmlinux.bin.xz FORCE
$(call if_changed,uimage,xz)
$(obj)/uImage.lzo: $(obj)/vmlinux.bin.lzo FORCE
$(call if_changed,uimage,lzo)
$(obj)/uImage.bin: $(obj)/vmlinux.bin FORCE
$(call if_changed,uimage,none)
$(obj)/uImage: $(obj)/uImage.$(suffix-y)
@ln -sf $(notdir $<) $@
@echo ' Image $@ is ready'
dtb-y += skeleton.dtb
# Built-in dtb
builtindtb-y := skeleton
ifneq ($(CONFIG_METAG_BUILTIN_DTB_NAME),"")
builtindtb-y := $(CONFIG_METAG_BUILTIN_DTB_NAME)
endif
obj-$(CONFIG_METAG_BUILTIN_DTB) += $(patsubst "%",%,$(builtindtb-y)).dtb.o
targets += dtbs
targets += $(dtb-y)
dtbs: $(addprefix $(obj)/, $(dtb-y))
clean-files += *.dtb
/*
* Copyright (C) 2012 Imagination Technologies Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/dts-v1/;
/include/ "skeleton.dtsi"
/*
* Skeleton device tree; the bare minimum needed to boot; just include and
* add a compatible value. The bootloader will typically populate the memory
* node.
*/
/ {
compatible = "img,meta";
#address-cells = <1>;
#size-cells = <1>;
chosen { };
aliases { };
memory { device_type = "memory"; reg = <0 0>; };
};
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_SWAP is not set
CONFIG_LOG_BUF_SHIFT=13
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_ELF_CORE is not set
CONFIG_SLAB=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
# CONFIG_MSDOS_PARTITION is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_FLATMEM_MANUAL=y
CONFIG_META12_FPGA=y
CONFIG_METAG_DA=y
CONFIG_HZ_100=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=16384
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_DA_TTY=y
CONFIG_DA_CONSOLE=y
# CONFIG_DEVKMEM is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_DNOTIFY is not set
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_SCHED_DEBUG is not set
CONFIG_DEBUG_INFO=y
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=13
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_ELF_CORE is not set
CONFIG_SLAB=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
# CONFIG_MSDOS_PARTITION is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_METAG_L2C=y
CONFIG_FLATMEM_MANUAL=y
CONFIG_METAG_HALT_ON_PANIC=y
CONFIG_METAG_DA=y
CONFIG_HZ_100=y
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=16384
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_DA_TTY=y
CONFIG_DA_CONSOLE=y
# CONFIG_DEVKMEM is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_DNOTIFY is not set
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_SCHED_DEBUG is not set
CONFIG_DEBUG_INFO=y
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=13
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_ELF_CORE is not set
CONFIG_SLAB=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
# CONFIG_MSDOS_PARTITION is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_METAG_L2C=y
CONFIG_FLATMEM_MANUAL=y
CONFIG_METAG_HALT_ON_PANIC=y
CONFIG_SMP=y
CONFIG_METAG_DA=y
CONFIG_HZ_100=y
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=16384
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_DA_TTY=y
CONFIG_DA_CONSOLE=y
# CONFIG_DEVKMEM is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_DNOTIFY is not set
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_SCHED_DEBUG is not set
CONFIG_DEBUG_INFO=y
generic-y += auxvec.h
generic-y += bitsperlong.h
generic-y += bugs.h
generic-y += clkdev.h
generic-y += cputime.h
generic-y += current.h
generic-y += device.h
generic-y += dma.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += exec.h
generic-y += fb.h
generic-y += fcntl.h
generic-y += futex.h
generic-y += hardirq.h
generic-y += hw_irq.h
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += ipcbuf.h
generic-y += irq_regs.h
generic-y += kdebug.h
generic-y += kmap_types.h
generic-y += kvm_para.h
generic-y += local.h
generic-y += local64.h
generic-y += msgbuf.h
generic-y += mutex.h
generic-y += param.h
generic-y += pci.h
generic-y += percpu.h
generic-y += poll.h
generic-y += posix_types.h
generic-y += scatterlist.h
generic-y += sections.h
generic-y += sembuf.h
generic-y += serial.h
generic-y += shmbuf.h
generic-y += shmparam.h
generic-y += signal.h
generic-y += socket.h
generic-y += sockios.h
generic-y += stat.h
generic-y += statfs.h
generic-y += switch_to.h
generic-y += termbits.h
generic-y += termios.h
generic-y += timex.h
generic-y += trace_clock.h
generic-y += types.h
generic-y += ucontext.h
generic-y += unaligned.h
generic-y += user.h
generic-y += vga.h
generic-y += xor.h
#ifndef __ASM_METAG_ATOMIC_H
#define __ASM_METAG_ATOMIC_H
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/cmpxchg.h>
#if defined(CONFIG_METAG_ATOMICITY_IRQSOFF)
/* The simple UP case. */
#include <asm-generic/atomic.h>
#else
#if defined(CONFIG_METAG_ATOMICITY_LOCK1)
#include <asm/atomic_lock1.h>
#else
#include <asm/atomic_lnkget.h>
#endif
#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0)
#define atomic_dec_return(v) atomic_sub_return(1, (v))
#define atomic_inc_return(v) atomic_add_return(1, (v))
/*
* atomic_inc_and_test - increment and test
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1
* and returns true if the result is zero, or false for all
* other cases.
*/
#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0)
#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0)
#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0)
#define atomic_inc(v) atomic_add(1, (v))
#define atomic_dec(v) atomic_sub(1, (v))
#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif
#define atomic_dec_if_positive(v) atomic_sub_if_positive(1, v)
#include <asm-generic/atomic64.h>
#endif /* __ASM_METAG_ATOMIC_H */
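The wrappers above build the usual atomic helpers (atomic_inc(),
atomic_dec_and_test(), ...) on top of whichever backend was selected (irqsoff,
lnkget or lock1). A minimal usage sketch of the resulting generic API follows;
it is illustrative only and the names are hypothetical, not part of this commit:

#include <linux/atomic.h>

static atomic_t example_refcount = ATOMIC_INIT(1);

static void example_get(void)
{
	/* atomic_inc() expands to atomic_add(1, ...) as defined above */
	atomic_inc(&example_refcount);
}

static int example_put(void)
{
	/*
	 * atomic_dec_and_test() returns true when the counter reaches zero,
	 * telling the caller it is now responsible for freeing the object.
	 */
	return atomic_dec_and_test(&example_refcount);
}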
#ifndef __ASM_METAG_ATOMIC_LNKGET_H
#define __ASM_METAG_ATOMIC_LNKGET_H
#define ATOMIC_INIT(i) { (i) }
#define atomic_set(v, i) ((v)->counter = (i))
#include <linux/compiler.h>
#include <asm/barrier.h>
/*
* None of these asm statements clobber memory as LNKSET writes around
* the cache so the memory it modifies cannot safely be read by any means
* other than these accessors.
*/
static inline int atomic_read(const atomic_t *v)
{
int temp;
asm volatile (
"LNKGETD %0, [%1]\n"
: "=da" (temp)
: "da" (&v->counter));
return temp;
}
static inline void atomic_add(int i, atomic_t *v)
{
int temp;
asm volatile (
"1: LNKGETD %0, [%1]\n"
" ADD %0, %0, %2\n"
" LNKSETD [%1], %0\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
: "=&d" (temp)
: "da" (&v->counter), "bd" (i)
: "cc");
}
static inline void atomic_sub(int i, atomic_t *v)
{
int temp;
asm volatile (
"1: LNKGETD %0, [%1]\n"
" SUB %0, %0, %2\n"
" LNKSETD [%1], %0\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
: "=&d" (temp)
: "da" (&v->counter), "bd" (i)
: "cc");
}
static inline int atomic_add_return(int i, atomic_t *v)
{
int result, temp;
smp_mb();
asm volatile (
"1: LNKGETD %1, [%2]\n"
" ADD %1, %1, %3\n"
" LNKSETD [%2], %1\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
: "=&d" (temp), "=&da" (result)
: "da" (&v->counter), "bd" (i)
: "cc");
smp_mb();
return result;
}
static inline int atomic_sub_return(int i, atomic_t *v)
{
int result, temp;
smp_mb();
asm volatile (
"1: LNKGETD %1, [%2]\n"
" SUB %1, %1, %3\n"
" LNKSETD [%2], %1\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
: "=&d" (temp), "=&da" (result)
: "da" (&v->counter), "bd" (i)
: "cc");
smp_mb();
return result;
}
static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
{
int temp;
asm volatile (
"1: LNKGETD %0, [%1]\n"
" AND %0, %0, %2\n"
" LNKSETD [%1] %0\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
: "=&d" (temp)
: "da" (&v->counter), "bd" (~mask)
: "cc");
}
static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
{
int temp;
asm volatile (
"1: LNKGETD %0, [%1]\n"
" OR %0, %0, %2\n"
" LNKSETD [%1], %0\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
: "=&d" (temp)
: "da" (&v->counter), "bd" (mask)
: "cc");
}
static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
int result, temp;
smp_mb();
asm volatile (
"1: LNKGETD %1, [%2]\n"
" CMP %1, %3\n"
" LNKSETDEQ [%2], %4\n"
" BNE 2f\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
"2:\n"
: "=&d" (temp), "=&d" (result)
: "da" (&v->counter), "bd" (old), "da" (new)
: "cc");
smp_mb();
return result;
}
static inline int atomic_xchg(atomic_t *v, int new)
{
int temp, old;
asm volatile (
"1: LNKGETD %1, [%2]\n"
" LNKSETD [%2], %3\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
: "=&d" (temp), "=&d" (old)
: "da" (&v->counter), "da" (new)
: "cc");
return old;
}
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
int result, temp;
smp_mb();
asm volatile (
"1: LNKGETD %1, [%2]\n"
" CMP %1, %3\n"
" ADD %0, %1, %4\n"
" LNKSETDNE [%2], %0\n"
" BEQ 2f\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
"2:\n"
: "=&d" (temp), "=&d" (result)
: "da" (&v->counter), "bd" (u), "bd" (a)
: "cc");
smp_mb();
return result;
}
static inline int atomic_sub_if_positive(int i, atomic_t *v)
{
int result, temp;
asm volatile (
"1: LNKGETD %1, [%2]\n"
" SUBS %1, %1, %3\n"
" LNKSETDGE [%2], %1\n"
" BLT 2f\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
"2:\n"
: "=&d" (temp), "=&da" (result)
: "da" (&v->counter), "bd" (i)
: "cc");
return result;
}
#endif /* __ASM_METAG_ATOMIC_LNKGET_H */
#ifndef __ASM_METAG_ATOMIC_LOCK1_H
#define __ASM_METAG_ATOMIC_LOCK1_H
#define ATOMIC_INIT(i) { (i) }
#include <linux/compiler.h>
#include <asm/barrier.h>
#include <asm/global_lock.h>
static inline int atomic_read(const atomic_t *v)
{
return (v)->counter;
}
/*
* atomic_set needs to take the lock to protect atomic_add_unless from a
* possible race, as it reads the counter twice:
*
* CPU0 CPU1
* atomic_add_unless(1, 0)
* ret = v->counter (non-zero)
* if (ret != u) v->counter = 0
* v->counter += 1 (counter set to 1)
*
* Making atomic_set take the lock ensures that ordering and logical
* consistency is preserved.
*/
static inline int atomic_set(atomic_t *v, int i)
{
unsigned long flags;
__global_lock1(flags);
fence();
v->counter = i;
__global_unlock1(flags);
return i;
}
static inline void atomic_add(int i, atomic_t *v)
{
unsigned long flags;
__global_lock1(flags);
fence();
v->counter += i;
__global_unlock1(flags);
}
static inline void atomic_sub(int i, atomic_t *v)
{
unsigned long flags;
__global_lock1(flags);
fence();
v->counter -= i;
__global_unlock1(flags);
}
static inline int atomic_add_return(int i, atomic_t *v)
{
unsigned long result;
unsigned long flags;
__global_lock1(flags);
result = v->counter;
result += i;
fence();
v->counter = result;
__global_unlock1(flags);
return result;
}
static inline int atomic_sub_return(int i, atomic_t *v)
{
unsigned long result;
unsigned long flags;
__global_lock1(flags);
result = v->counter;
result -= i;
fence();
v->counter = result;
__global_unlock1(flags);
return result;
}
static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
{
unsigned long flags;
__global_lock1(flags);
fence();
v->counter &= ~mask;
__global_unlock1(flags);
}
static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
{
unsigned long flags;
__global_lock1(flags);
fence();
v->counter |= mask;
__global_unlock1(flags);
}
static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
int ret;
unsigned long flags;
__global_lock1(flags);
ret = v->counter;
if (ret == old) {
fence();
v->counter = new;
}
__global_unlock1(flags);
return ret;
}
#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
int ret;
unsigned long flags;
__global_lock1(flags);
ret = v->counter;
if (ret != u) {
fence();
v->counter += a;
}
__global_unlock1(flags);
return ret;
}
static inline int atomic_sub_if_positive(int i, atomic_t *v)
{
int ret;
unsigned long flags;
__global_lock1(flags);
ret = v->counter - 1;
if (ret >= 0) {
fence();
v->counter = ret;
}
__global_unlock1(flags);
return ret;
}
#endif /* __ASM_METAG_ATOMIC_LOCK1_H */
#ifndef _ASM_METAG_BARRIER_H
#define _ASM_METAG_BARRIER_H
#include <asm/metag_mem.h>
#define nop() asm volatile ("NOP")
#define mb() wmb()
#define rmb() barrier()
#ifdef CONFIG_METAG_META21
/* HTP and above have a system event to fence writes */
static inline void wr_fence(void)
{
volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_FENCE;
barrier();
*flushptr = 0;
}
#else /* CONFIG_METAG_META21 */
/*
* ATP doesn't have system event to fence writes, so it is necessary to flush
* the processor write queues as well as possibly the write combiner (depending
* on the page being written).
* To ensure the write queues are flushed we do 4 writes to a system event
* register (in this case write combiner flush) which will also flush the write
* combiner.
*/
static inline void wr_fence(void)
{
volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_COMBINE_FLUSH;
barrier();
*flushptr = 0;
*flushptr = 0;
*flushptr = 0;
*flushptr = 0;
}
#endif /* !CONFIG_METAG_META21 */
static inline void wmb(void)
{
/* flush writes through the write combiner */
wr_fence();
}
#define read_barrier_depends() do { } while (0)
#ifndef CONFIG_SMP
#define fence() do { } while (0)
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#else
#ifdef CONFIG_METAG_SMP_WRITE_REORDERING
/*
* Write to the atomic memory unlock system event register (command 0). This is
* needed before a write to shared memory in a critical section, to prevent
* external reordering of writes before the fence on other threads with writes
* after the fence on this thread (and to prevent the ensuing cache-memory
* incoherence). It is therefore ineffective if used after and on the same
* thread as a write.
*/
static inline void fence(void)
{
volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
barrier();
*flushptr = 0;
}
#define smp_mb() fence()
#define smp_rmb() fence()
#define smp_wmb() barrier()
#else
#define fence() do { } while (0)
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#endif
#endif
#define smp_read_barrier_depends() do { } while (0)
#define set_mb(var, value) do { var = value; smp_mb(); } while (0)
#endif /* _ASM_METAG_BARRIER_H */
#ifndef __ASM_METAG_BITOPS_H
#define __ASM_METAG_BITOPS_H
#include <linux/compiler.h>
#include <asm/barrier.h>
#include <asm/global_lock.h>
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#ifdef CONFIG_SMP
/*
* These functions are the basis of our bit ops.
*/
static inline void set_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned long mask = 1UL << (bit & 31);
p += bit >> 5;
__global_lock1(flags);
fence();
*p |= mask;
__global_unlock1(flags);
}
static inline void clear_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned long mask = 1UL << (bit & 31);
p += bit >> 5;
__global_lock1(flags);
fence();
*p &= ~mask;
__global_unlock1(flags);
}
static inline void change_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned long mask = 1UL << (bit & 31);
p += bit >> 5;
__global_lock1(flags);
fence();
*p ^= mask;
__global_unlock1(flags);
}
static inline int test_and_set_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned long old;
unsigned long mask = 1UL << (bit & 31);
p += bit >> 5;
__global_lock1(flags);
old = *p;
if (!(old & mask)) {
fence();
*p = old | mask;
}
__global_unlock1(flags);
return (old & mask) != 0;
}
static inline int test_and_clear_bit(unsigned int bit,
volatile unsigned long *p)
{
unsigned long flags;
unsigned long old;
unsigned long mask = 1UL << (bit & 31);
p += bit >> 5;
__global_lock1(flags);
old = *p;
if (old & mask) {
fence();
*p = old & ~mask;
}
__global_unlock1(flags);
return (old & mask) != 0;
}
static inline int test_and_change_bit(unsigned int bit,
volatile unsigned long *p)
{
unsigned long flags;
unsigned long old;
unsigned long mask = 1UL << (bit & 31);
p += bit >> 5;
__global_lock1(flags);
fence();
old = *p;
*p = old ^ mask;
__global_unlock1(flags);
return (old & mask) != 0;
}
#else
#include <asm-generic/bitops/atomic.h>
#endif /* CONFIG_SMP */
#include <asm-generic/bitops/non-atomic.h>
#include <asm-generic/bitops/find.h>
#include <asm-generic/bitops/ffs.h>
#include <asm-generic/bitops/__ffs.h>
#include <asm-generic/bitops/ffz.h>
#include <asm-generic/bitops/fls.h>
#include <asm-generic/bitops/__fls.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/lock.h>
#include <asm-generic/bitops/sched.h>
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>
#endif /* __ASM_METAG_BITOPS_H */
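The SMP implementations above follow the generic kernel bit-op calling
convention: a bit number plus a pointer to an unsigned long word (or array).
A short illustrative sketch, with hypothetical names not taken from this commit:

#include <linux/bitops.h>
#include <linux/errno.h>

#define EXAMPLE_BUSY	0	/* bit number, not a mask */

static unsigned long example_flags;

static int example_start(void)
{
	/* test_and_set_bit() returns the previous value of the bit */
	if (test_and_set_bit(EXAMPLE_BUSY, &example_flags))
		return -EBUSY;
	return 0;
}

static void example_finish(void)
{
	smp_mb__before_clear_bit();
	clear_bit(EXAMPLE_BUSY, &example_flags);
}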
#ifndef _ASM_METAG_BUG_H
#define _ASM_METAG_BUG_H
#include <asm-generic/bug.h>
struct pt_regs;
extern const char *trap_name(int trapno);
extern void die(const char *str, struct pt_regs *regs, long err,
unsigned long addr) __attribute__ ((noreturn));
#endif
#ifndef __ASM_METAG_CACHE_H
#define __ASM_METAG_CACHE_H
/* L1 cache line size (64 bytes) */
#define L1_CACHE_SHIFT 6
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
/* Meta requires large data items to be 8 byte aligned. */
#define ARCH_SLAB_MINALIGN 8
/*
* With an L2 cache, we may invalidate dirty lines, so we need to ensure DMA
* buffers have cache line alignment.
*/
#ifdef CONFIG_METAG_L2C
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
#else
#define ARCH_DMA_MINALIGN 8
#endif
#define __read_mostly __attribute__((__section__(".data..read_mostly")))
#endif
#ifndef _METAG_CACHEFLUSH_H
#define _METAG_CACHEFLUSH_H
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/io.h>
#include <asm/l2cache.h>
#include <asm/metag_isa.h>
#include <asm/metag_mem.h>
void metag_cache_probe(void);
void metag_data_cache_flush_all(const void *start);
void metag_code_cache_flush_all(const void *start);
/*
* Routines to flush physical cache lines that may be used to cache data or code
* normally accessed via the linear address range supplied. The region flushed
* must either lie in local or global address space determined by the top bit of
the start address. If bytes is >= 4K then the whole of the related cache
* state will be flushed rather than a limited range.
*/
void metag_data_cache_flush(const void *start, int bytes);
void metag_code_cache_flush(const void *start, int bytes);
#ifdef CONFIG_METAG_META12
/* Write through, virtually tagged, split I/D cache. */
static inline void __flush_cache_all(void)
{
metag_code_cache_flush_all((void *) PAGE_OFFSET);
metag_data_cache_flush_all((void *) PAGE_OFFSET);
}
#define flush_cache_all() __flush_cache_all()
/* flush the entire user address space referenced in this mm structure */
static inline void flush_cache_mm(struct mm_struct *mm)
{
if (mm == current->mm)
__flush_cache_all();
}
#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
/* flush a range of addresses from this mm */
static inline void flush_cache_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
flush_cache_mm(vma->vm_mm);
}
static inline void flush_cache_page(struct vm_area_struct *vma,
unsigned long vmaddr, unsigned long pfn)
{
flush_cache_mm(vma->vm_mm);
}
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
static inline void flush_dcache_page(struct page *page)
{
metag_data_cache_flush_all((void *) PAGE_OFFSET);
}
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
static inline void flush_icache_page(struct vm_area_struct *vma,
struct page *page)
{
metag_code_cache_flush(page_to_virt(page), PAGE_SIZE);
}
static inline void flush_cache_vmap(unsigned long start, unsigned long end)
{
metag_data_cache_flush_all((void *) PAGE_OFFSET);
}
static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
{
metag_data_cache_flush_all((void *) PAGE_OFFSET);
}
#else
/* Write through, physically tagged, split I/D cache. */
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
#define flush_icache_page(vma, pg) do { } while (0)
#define flush_cache_vmap(start, end) do { } while (0)
#define flush_cache_vunmap(start, end) do { } while (0)
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
static inline void flush_dcache_page(struct page *page)
{
/* FIXME: We can do better than this. All we are trying to do is
* make the i-cache coherent, we should use the PG_arch_1 bit like
* e.g. powerpc.
*/
#ifdef CONFIG_SMP
metag_out32(1, SYSC_ICACHE_FLUSH);
#else
metag_code_cache_flush_all((void *) PAGE_OFFSET);
#endif
}
#endif
/* Push n pages at kernel virtual address and clear the icache */
static inline void flush_icache_range(unsigned long address,
unsigned long endaddr)
{
#ifdef CONFIG_SMP
metag_out32(1, SYSC_ICACHE_FLUSH);
#else
metag_code_cache_flush((void *) address, endaddr - address);
#endif
}
static inline void flush_cache_sigtramp(unsigned long addr, int size)
{
/*
* Flush the icache in case there was previously some code
* fetched from this address, perhaps a previous sigtramp.
*
* We don't need to flush the dcache, it's write through and
* we just wrote the sigtramp code through it.
*/
#ifdef CONFIG_SMP
metag_out32(1, SYSC_ICACHE_FLUSH);
#else
metag_code_cache_flush((void *) addr, size);
#endif
}
#ifdef CONFIG_METAG_L2C
/*
* Perform a single specific CACHEWD operation on an address, masking lower bits
* of address first.
*/
static inline void cachewd_line(void *addr, unsigned int data)
{
unsigned long masked = (unsigned long)addr & -0x40;
__builtin_meta2_cachewd((void *)masked, data);
}
/* Perform a certain CACHEW op on each cache line in a range */
static inline void cachew_region_op(void *start, unsigned long size,
unsigned int op)
{
unsigned long offset = (unsigned long)start & 0x3f;
int i;
if (offset) {
size += offset;
start -= offset;
}
i = (size - 1) >> 6;
do {
__builtin_meta2_cachewd(start, op);
start += 0x40;
} while (i--);
}
/* prevent write fence and flushbacks being reordered in L2 */
static inline void l2c_fence_flush(void *addr)
{
/*
* Synchronise by reading back and re-flushing.
* It is assumed this access will miss, as the caller should have just
* flushed the cache line.
*/
(void)*(volatile u8 *)addr;
cachewd_line(addr, CACHEW_FLUSH_L1D_L2);
}
/* prevent write fence and writebacks being reordered in L2 */
static inline void l2c_fence(void *addr)
{
/*
* A write back has occurred, but not necessarily an invalidate, so the
* readback in l2c_fence_flush() would hit in the cache and have no
* effect. Therefore fully flush the line first.
*/
cachewd_line(addr, CACHEW_FLUSH_L1D_L2);
l2c_fence_flush(addr);
}
/* Used to keep memory consistent when doing DMA. */
static inline void flush_dcache_region(void *start, unsigned long size)
{
/* metag_data_cache_flush won't flush L2 cache lines if size >= 4096 */
if (meta_l2c_is_enabled()) {
cachew_region_op(start, size, CACHEW_FLUSH_L1D_L2);
if (meta_l2c_is_writeback())
l2c_fence_flush(start + size - 1);
} else {
metag_data_cache_flush(start, size);
}
}
/* Write back dirty lines to memory (or do nothing if no writeback caches) */
static inline void writeback_dcache_region(void *start, unsigned long size)
{
if (meta_l2c_is_enabled() && meta_l2c_is_writeback()) {
cachew_region_op(start, size, CACHEW_WRITEBACK_L1D_L2);
l2c_fence(start + size - 1);
}
}
/* Invalidate (may also write back if necessary) */
static inline void invalidate_dcache_region(void *start, unsigned long size)
{
if (meta_l2c_is_enabled())
cachew_region_op(start, size, CACHEW_INVALIDATE_L1D_L2);
else
metag_data_cache_flush(start, size);
}
#else
#define flush_dcache_region(s, l) metag_data_cache_flush((s), (l))
#define writeback_dcache_region(s, l) do {} while (0)
#define invalidate_dcache_region(s, l) flush_dcache_region((s), (l))
#endif
static inline void copy_to_user_page(struct vm_area_struct *vma,
struct page *page, unsigned long vaddr,
void *dst, const void *src,
unsigned long len)
{
memcpy(dst, src, len);
flush_icache_range((unsigned long)dst, (unsigned long)dst + len);
}
static inline void copy_from_user_page(struct vm_area_struct *vma,
struct page *page, unsigned long vaddr,
void *dst, const void *src,
unsigned long len)
{
memcpy(dst, src, len);
}
#endif /* _METAG_CACHEFLUSH_H */
/*
* Meta cache partition manipulation.
*
* Copyright 2010 Imagination Technologies Ltd.
*/
#ifndef _METAG_CACHEPART_H_
#define _METAG_CACHEPART_H_
/**
* get_dcache_size() - Get size of data cache.
*/
unsigned int get_dcache_size(void);
/**
* get_icache_size() - Get size of code cache.
*/
unsigned int get_icache_size(void);
/**
* get_global_dcache_size() - Get the thread's global dcache.
*
* Returns the size of the current thread's global dcache partition.
*/
unsigned int get_global_dcache_size(void);
/**
* get_global_icache_size() - Get the thread's global icache.
*
* Returns the size of the current thread's global icache partition.
*/
unsigned int get_global_icache_size(void);
/**
* check_for_cache_aliasing() - Ensure that the bootloader has configured the
* dcache and icache properly to avoid aliasing
* @thread_id: Hardware thread ID
*
*/
void check_for_cache_aliasing(int thread_id);
#endif
#ifndef _METAG_CHECKSUM_H
#define _METAG_CHECKSUM_H
/*
* computes the checksum of a memory block at buff, length len,
* and adds in "sum" (32-bit)
*
* returns a 32-bit number suitable for feeding into itself
* or csum_tcpudp_magic
*
* this function must be called with even lengths, except
* for the last fragment, which may be odd
*
* it's best to have buff aligned on a 32-bit boundary
*/
extern __wsum csum_partial(const void *buff, int len, __wsum sum);
/*
* the same as csum_partial, but copies from src while it
* checksums
*
* here even more important to align src and dst on a 32-bit (or even
* better 64-bit) boundary
*/
extern __wsum csum_partial_copy(const void *src, void *dst, int len,
__wsum sum);
/*
* the same as csum_partial_copy, but copies from user space.
*
* here even more important to align src and dst on a 32-bit (or even
* better 64-bit) boundary
*/
extern __wsum csum_partial_copy_from_user(const void __user *src, void *dst,
int len, __wsum sum, int *csum_err);
#define csum_partial_copy_nocheck(src, dst, len, sum) \
csum_partial_copy((src), (dst), (len), (sum))
/*
* Fold a partial checksum
*/
static inline __sum16 csum_fold(__wsum csum)
{
u32 sum = (__force u32)csum;
sum = (sum & 0xffff) + (sum >> 16);
sum = (sum & 0xffff) + (sum >> 16);
return (__force __sum16)~sum;
}
/*
* This is a version of ip_compute_csum() optimized for IP headers,
* which always checksum on 4 octet boundaries.
*/
extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
/*
* computes the checksum of the TCP/UDP pseudo-header
* returns a 16-bit checksum, already complemented
*/
static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
unsigned short len,
unsigned short proto,
__wsum sum)
{
unsigned long len_proto = (proto + len) << 8;
asm ("ADD %0, %0, %1\n"
"ADDS %0, %0, %2\n"
"ADDCS %0, %0, #1\n"
"ADDS %0, %0, %3\n"
"ADDCS %0, %0, #1\n"
: "=d" (sum)
: "d" (daddr), "d" (saddr), "d" (len_proto),
"0" (sum)
: "cc");
return sum;
}
static inline __sum16
csum_tcpudp_magic(__be32 saddr, __be32 daddr, unsigned short len,
unsigned short proto, __wsum sum)
{
return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
}
/*
* this routine is used for miscellaneous IP-like checksums, mainly
* in icmp.c
*/
extern __sum16 ip_compute_csum(const void *buff, int len);
#endif /* _METAG_CHECKSUM_H */
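/*
 * Worked example (editorial sketch, not part of the header above): how the
 * csum_fold() arithmetic collapses a 32-bit partial sum into a 16-bit
 * ones-complement checksum. This is a standalone user-space sketch using
 * plain integer types instead of the kernel's __wsum/__sum16 annotations.
 */
#include <stdint.h>
#include <stdio.h>

static uint16_t fold_example(uint32_t sum)
{
        /* Add the carry out of the low 16 bits back in, twice, so a carry
         * produced by the first addition is also absorbed. */
        sum = (sum & 0xffff) + (sum >> 16);
        sum = (sum & 0xffff) + (sum >> 16);
        /* Ones-complement of the folded value, truncated to 16 bits. */
        return (uint16_t)~sum;
}

int main(void)
{
        /* 0x0001fffe folds to 0xffff, whose complement is 0x0000. */
        printf("0x%04x\n", (unsigned int)fold_example(0x0001fffeu));
        return 0;
}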
/*
* arch/metag/include/asm/clock.h
*
* Copyright (C) 2012 Imagination Technologies Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _METAG_CLOCK_H_
#define _METAG_CLOCK_H_
#include <asm/mach/arch.h>
/**
* struct meta_clock_desc - Meta Core clock callbacks.
* @get_core_freq: Get the frequency of the Meta core. If this is NULL, the
* core frequency will be determined like this:
* Meta 1: based on loops_per_jiffy.
* Meta 2: (EXPAND_TIMER_DIV + 1) MHz.
*/
struct meta_clock_desc {
unsigned long (*get_core_freq)(void);
};
extern struct meta_clock_desc _meta_clock;
/*
* Set up the default clock, ensuring all callbacks are valid - only accessible
* during boot.
*/
void setup_meta_clocks(struct meta_clock_desc *desc);
/**
* get_coreclock() - Get the frequency of the Meta core clock.
*
* Returns: The Meta core clock frequency in Hz.
*/
static inline unsigned long get_coreclock(void)
{
/*
* Use the current clock callback. If set correctly this will provide
* the most accurate frequency as it can be calculated directly from the
 * PLL configuration; otherwise a default callback will have been set
* instead.
*/
return _meta_clock.get_core_freq();
}
#endif /* _METAG_CLOCK_H_ */
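/*
 * Illustrative sketch (not part of the header above): how a hypothetical
 * board file could provide the core clock callback. The 400 MHz figure and
 * the my_board_* names are made up for the example; setup_meta_clocks() is
 * the boot-time registration function declared above.
 */
static unsigned long my_board_get_core_freq(void)
{
        return 400000000UL;     /* fixed 400 MHz core clock on this board */
}

static struct meta_clock_desc my_board_clocks = {
        .get_core_freq  = my_board_get_core_freq,
};

static void __init my_board_init_early(void)
{
        setup_meta_clocks(&my_board_clocks);
}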
#ifndef __ASM_METAG_CMPXCHG_H
#define __ASM_METAG_CMPXCHG_H
#include <asm/barrier.h>
#if defined(CONFIG_METAG_ATOMICITY_IRQSOFF)
#include <asm/cmpxchg_irq.h>
#elif defined(CONFIG_METAG_ATOMICITY_LOCK1)
#include <asm/cmpxchg_lock1.h>
#elif defined(CONFIG_METAG_ATOMICITY_LNKGET)
#include <asm/cmpxchg_lnkget.h>
#endif
extern void __xchg_called_with_bad_pointer(void);
#define __xchg(ptr, x, size) \
({ \
unsigned long __xchg__res; \
volatile void *__xchg_ptr = (ptr); \
switch (size) { \
case 4: \
__xchg__res = xchg_u32(__xchg_ptr, x); \
break; \
case 1: \
__xchg__res = xchg_u8(__xchg_ptr, x); \
break; \
default: \
__xchg_called_with_bad_pointer(); \
__xchg__res = x; \
break; \
} \
\
__xchg__res; \
})
#define xchg(ptr, x) \
((__typeof__(*(ptr)))__xchg((ptr), (unsigned long)(x), sizeof(*(ptr))))
/* This function doesn't exist, so you'll get a linker error
* if something tries to do an invalid cmpxchg(). */
extern void __cmpxchg_called_with_bad_pointer(void);
static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
unsigned long new, int size)
{
switch (size) {
case 4:
return __cmpxchg_u32(ptr, old, new);
}
__cmpxchg_called_with_bad_pointer();
return old;
}
#define __HAVE_ARCH_CMPXCHG 1
#define cmpxchg(ptr, o, n) \
({ \
__typeof__(*(ptr)) _o_ = (o); \
__typeof__(*(ptr)) _n_ = (n); \
(__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \
(unsigned long)_n_, \
sizeof(*(ptr))); \
})
#endif /* __ASM_METAG_CMPXCHG_H */
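/*
 * Usage sketch (not part of the header above): the canonical retry loop
 * built on the cmpxchg() macro. 'counter' is a made-up shared variable used
 * purely for illustration.
 */
static inline void example_atomic_add(volatile unsigned int *counter,
                                      unsigned int delta)
{
        unsigned int old, new;

        do {
                old = *counter;
                new = old + delta;
                /* cmpxchg() returns the value it found at *counter; retry
                 * if another thread changed it between the read and swap. */
        } while (cmpxchg(counter, old, new) != old);
}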
#ifndef __ASM_METAG_CMPXCHG_IRQ_H
#define __ASM_METAG_CMPXCHG_IRQ_H
#include <linux/irqflags.h>
static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
{
unsigned long flags, retval;
local_irq_save(flags);
retval = *m;
*m = val;
local_irq_restore(flags);
return retval;
}
static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
{
unsigned long flags, retval;
local_irq_save(flags);
retval = *m;
*m = val & 0xff;
local_irq_restore(flags);
return retval;
}
static inline unsigned long __cmpxchg_u32(volatile int *m, unsigned long old,
unsigned long new)
{
__u32 retval;
unsigned long flags;
local_irq_save(flags);
retval = *m;
if (retval == old)
*m = new;
local_irq_restore(flags); /* implies memory barrier */
return retval;
}
#endif /* __ASM_METAG_CMPXCHG_IRQ_H */
#ifndef __ASM_METAG_CMPXCHG_LNKGET_H
#define __ASM_METAG_CMPXCHG_LNKGET_H
static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
{
int temp, old;
smp_mb();
asm volatile (
"1: LNKGETD %1, [%2]\n"
" LNKSETD [%2], %3\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
#ifdef CONFIG_METAG_LNKGET_AROUND_CACHE
" DCACHE [%2], %0\n"
#endif
: "=&d" (temp), "=&d" (old)
: "da" (m), "da" (val)
: "cc"
);
smp_mb();
return old;
}
static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
{
int temp, old;
smp_mb();
asm volatile (
"1: LNKGETD %1, [%2]\n"
" LNKSETD [%2], %3\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
#ifdef CONFIG_METAG_LNKGET_AROUND_CACHE
" DCACHE [%2], %0\n"
#endif
: "=&d" (temp), "=&d" (old)
: "da" (m), "da" (val & 0xff)
: "cc"
);
smp_mb();
return old;
}
static inline unsigned long __cmpxchg_u32(volatile int *m, unsigned long old,
unsigned long new)
{
__u32 retval, temp;
smp_mb();
asm volatile (
"1: LNKGETD %1, [%2]\n"
" CMP %1, %3\n"
" LNKSETDEQ [%2], %4\n"
" BNE 2f\n"
" DEFR %0, TXSTAT\n"
" ANDT %0, %0, #HI(0x3f000000)\n"
" CMPT %0, #HI(0x02000000)\n"
" BNZ 1b\n"
#ifdef CONFIG_METAG_LNKGET_AROUND_CACHE
" DCACHE [%2], %0\n"
#endif
"2:\n"
: "=&d" (temp), "=&da" (retval)
: "da" (m), "bd" (old), "da" (new)
: "cc"
);
smp_mb();
return retval;
}
#endif /* __ASM_METAG_CMPXCHG_LNKGET_H */
#ifndef __ASM_METAG_CMPXCHG_LOCK1_H
#define __ASM_METAG_CMPXCHG_LOCK1_H
#include <asm/global_lock.h>
/* Use LOCK2 as these have to be atomic w.r.t. ordinary accesses. */
static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
{
unsigned long flags, retval;
__global_lock2(flags);
fence();
retval = *m;
*m = val;
__global_unlock2(flags);
return retval;
}
static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
{
unsigned long flags, retval;
__global_lock2(flags);
fence();
retval = *m;
*m = val & 0xff;
__global_unlock2(flags);
return retval;
}
static inline unsigned long __cmpxchg_u32(volatile int *m, unsigned long old,
unsigned long new)
{
__u32 retval;
unsigned long flags;
__global_lock2(flags);
retval = *m;
if (retval == old) {
fence();
*m = new;
}
__global_unlock2(flags);
return retval;
}
#endif /* __ASM_METAG_CMPXCHG_LOCK1_H */
#ifndef __ASM_METAG_CORE_REG_H_
#define __ASM_METAG_CORE_REG_H_
#include <asm/metag_regs.h>
extern void core_reg_write(int unit, int reg, int thread, unsigned int val);
extern unsigned int core_reg_read(int unit, int reg, int thread);
/*
* These macros allow direct access from C to any register known to the
* assembler. Example candidates are TXTACTCYC, TXIDLECYC, and TXPRIVEXT.
*/
#define __core_reg_get(reg) ({ \
unsigned int __grvalue; \
asm volatile("MOV %0," #reg \
: "=r" (__grvalue)); \
__grvalue; \
})
#define __core_reg_set(reg, value) do { \
unsigned int __srvalue = (value); \
asm volatile("MOV " #reg ",%0" \
: \
: "r" (__srvalue)); \
} while (0)
#define __core_reg_swap(reg, value) do { \
unsigned int __srvalue = (value); \
asm volatile("SWAP " #reg ",%0" \
: "+r" (__srvalue)); \
(value) = __srvalue; \
} while (0)
#endif
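/*
 * Usage sketch (not part of the header above): reading the TXTACTCYC cycle
 * counter mentioned in the comment as an example candidate. The
 * elapsed_cycles() helper is made up; it assumes TXTACTCYC counts up while
 * the thread is active, so the difference between two reads gives an
 * approximate cycle count for the code in between.
 */
static inline unsigned int elapsed_cycles(unsigned int start)
{
        return __core_reg_get(TXTACTCYC) - start;
}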
#ifndef _ASM_METAG_CPU_H
#define _ASM_METAG_CPU_H
#include <linux/percpu.h>
struct cpuinfo_metag {
struct cpu cpu;
#ifdef CONFIG_SMP
unsigned long loops_per_jiffy;
#endif
};
DECLARE_PER_CPU(struct cpuinfo_metag, cpu_data);
#endif /* _ASM_METAG_CPU_H */
/*
* Meta DA JTAG debugger control.
*
* Copyright 2012 Imagination Technologies Ltd.
*/
#ifndef _METAG_DA_H_
#define _METAG_DA_H_
#ifdef CONFIG_METAG_DA
#include <linux/init.h>
#include <linux/types.h>
extern bool _metag_da_present;
/**
* metag_da_enabled() - Find whether a DA is currently enabled.
*
* Returns: true if a DA was detected, false if not.
*/
static inline bool metag_da_enabled(void)
{
return _metag_da_present;
}
/**
* metag_da_probe() - Try and detect a connected DA.
*
* This is used at start up to detect whether a DA is active.
*
* Returns: 0 on detection, -err otherwise.
*/
int __init metag_da_probe(void);
#else /* !CONFIG_METAG_DA */
#define metag_da_enabled() false
#define metag_da_probe() do {} while (0)
#endif
#endif /* _METAG_DA_H_ */
#ifndef _METAG_DELAY_H
#define _METAG_DELAY_H
/*
* Copyright (C) 1993 Linus Torvalds
*
* Delay routines calling functions in arch/metag/lib/delay.c
*/
/* Undefined functions to get link-time errors */
extern void __bad_udelay(void);
extern void __bad_ndelay(void);
extern void __udelay(unsigned long usecs);
extern void __ndelay(unsigned long nsecs);
extern void __const_udelay(unsigned long xloops);
extern void __delay(unsigned long loops);
/* 0x10c7 is 2**32 / 1000000 (rounded up) */
#define udelay(n) (__builtin_constant_p(n) ? \
((n) > 20000 ? __bad_udelay() : __const_udelay((n) * 0x10c7ul)) : \
__udelay(n))
/* 0x5 is 2**32 / 1000000000 (rounded up) */
#define ndelay(n) (__builtin_constant_p(n) ? \
((n) > 20000 ? __bad_ndelay() : __const_udelay((n) * 5ul)) : \
__ndelay(n))
#endif /* _METAG_DELAY_H */
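/*
 * Worked check (editorial sketch, not part of the header above) of the
 * multipliers used by udelay()/ndelay(): 2^32 / 10^6 = 4294.97..., which
 * rounds up to 4295 = 0x10c7, and 2^32 / 10^9 = 4.29..., which rounds up
 * to 5. A standalone user-space sketch of the same arithmetic:
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t two_pow_32 = 1ULL << 32;
        /* integer ceiling division */
        uint64_t us_mult = (two_pow_32 + 1000000 - 1) / 1000000;
        uint64_t ns_mult = (two_pow_32 + 1000000000 - 1) / 1000000000;

        printf("udelay multiplier: %#llx (%llu)\n",
               (unsigned long long)us_mult, (unsigned long long)us_mult);
        printf("ndelay multiplier: %#llx (%llu)\n",
               (unsigned long long)ns_mult, (unsigned long long)ns_mult);
        return 0;
}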
#ifndef __ASM_DIV64_H__
#define __ASM_DIV64_H__
#include <asm-generic/div64.h>
extern u64 div_u64(u64 dividend, u64 divisor);
extern s64 div_s64(s64 dividend, s64 divisor);
#define div_u64 div_u64
#define div_s64 div_s64
#endif
#ifndef _ASM_METAG_DMA_MAPPING_H
#define _ASM_METAG_DMA_MAPPING_H
#include <linux/mm.h>
#include <asm/cache.h>
#include <asm/io.h>
#include <linux/scatterlist.h>
#include <asm/bug.h>
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
void dma_sync_for_device(void *vaddr, size_t size, int dma_direction);
void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction);
int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
WARN_ON(size == 0);
dma_sync_for_device(ptr, size, direction);
return virt_to_phys(ptr);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_cpu(phys_to_virt(dma_addr), size, direction);
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{
struct scatterlist *sg;
int i;
BUG_ON(!valid_dma_direction(direction));
WARN_ON(nents == 0 || sglist[0].length == 0);
for_each_sg(sglist, sg, nents, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
return nents;
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_device((void *)(page_to_phys(page) + offset), size,
direction);
return page_to_phys(page) + offset;
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nhwentries,
enum dma_data_direction direction)
{
struct scatterlist *sg;
int i;
BUG_ON(!valid_dma_direction(direction));
WARN_ON(nhwentries == 0 || sglist[0].length == 0);
for_each_sg(sglist, sg, nhwentries, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_cpu(phys_to_virt(dma_handle)+offset, size,
direction);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_device(phys_to_virt(dma_handle)+offset, size,
direction);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
int i;
for (i = 0; i < nelems; i++, sg++)
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
int i;
for (i = 0; i < nelems; i++, sg++)
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
static inline int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
#define dma_supported(dev, mask) (1)
static inline int
dma_set_mask(struct device *dev, u64 mask)
{
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
}
/*
* dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
* do any flushing here.
*/
static inline void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction)
{
}
/* drivers/base/dma-mapping.c */
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif
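/*
 * Usage sketch (not part of the header above): the streaming-DMA pattern
 * these helpers implement. example_dma_tx() and its caller are made up;
 * the point is the map -> device access -> unmap ordering, which on Meta
 * reduces to cache maintenance plus virtual/physical address conversion.
 */
static int example_dma_tx(struct device *dev, void *buf, size_t len)
{
        dma_addr_t handle;

        /* Write back the CPU's view of the buffer so the device sees it. */
        handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, handle))
                return -EIO;

        /* ... program the device with 'handle' and wait for completion ... */

        /* On Meta this only performs the matching cache maintenance. */
        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
        return 0;
}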
#ifndef __ASM_METAG_ELF_H
#define __ASM_METAG_ELF_H
#define EM_METAG 174
/* Meta relocations */
#define R_METAG_HIADDR16 0
#define R_METAG_LOADDR16 1
#define R_METAG_ADDR32 2
#define R_METAG_NONE 3
#define R_METAG_RELBRANCH 4
#define R_METAG_GETSETOFF 5
/* Backward compatibility */
#define R_METAG_REG32OP1 6
#define R_METAG_REG32OP2 7
#define R_METAG_REG32OP3 8
#define R_METAG_REG16OP1 9
#define R_METAG_REG16OP2 10
#define R_METAG_REG16OP3 11
#define R_METAG_REG32OP4 12
#define R_METAG_HIOG 13
#define R_METAG_LOOG 14
/* GNU */
#define R_METAG_GNU_VTINHERIT 30
#define R_METAG_GNU_VTENTRY 31
/* PIC relocations */
#define R_METAG_HI16_GOTOFF 32
#define R_METAG_LO16_GOTOFF 33
#define R_METAG_GETSET_GOTOFF 34
#define R_METAG_GETSET_GOT 35
#define R_METAG_HI16_GOTPC 36
#define R_METAG_LO16_GOTPC 37
#define R_METAG_HI16_PLT 38
#define R_METAG_LO16_PLT 39
#define R_METAG_RELBRANCH_PLT 40
#define R_METAG_GOTOFF 41
#define R_METAG_PLT 42
#define R_METAG_COPY 43
#define R_METAG_JMP_SLOT 44
#define R_METAG_RELATIVE 45
#define R_METAG_GLOB_DAT 46
/*
* ELF register definitions.
*/
#include <asm/page.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/user.h>
typedef unsigned long elf_greg_t;
#define ELF_NGREG (sizeof(struct user_gp_regs) / sizeof(elf_greg_t))
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef unsigned long elf_fpregset_t;
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch(x) ((x)->e_machine == EM_METAG)
/*
* These are used to set parameters in the core dumps.
*/
#define ELF_CLASS ELFCLASS32
#define ELF_DATA ELFDATA2LSB
#define ELF_ARCH EM_METAG
#define ELF_PLAT_INIT(_r, load_addr) \
do { _r->ctx.AX[0].U0 = 0; } while (0)
#define USE_ELF_CORE_DUMP
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE PAGE_SIZE
/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
use of this is to invoke "./ld.so someprog" to test out a new version of
the loader. We need to make sure that it is out of the way of the program
that it will "exec", and that there is sufficient room for the brk. */
#define ELF_ET_DYN_BASE 0x08000000UL
#define ELF_CORE_COPY_REGS(_dest, _regs) \
memcpy((char *)&_dest, (char *)_regs, sizeof(struct pt_regs));
/* This yields a mask that user programs can use to figure out what
instruction set this cpu supports. */
#define ELF_HWCAP (0)
/* This yields a string that ld.so will use to load implementation
specific libraries for optimization. This is more specific in
intent than poking at uname or /proc/cpuinfo. */
#define ELF_PLATFORM (NULL)
#define SET_PERSONALITY(ex) \
set_personality(PER_LINUX | (current->personality & (~PER_MASK)))
#define STACK_RND_MASK (0)
#ifdef CONFIG_METAG_USER_TCM
struct elf32_phdr;
struct file;
unsigned long __metag_elf_map(struct file *filep, unsigned long addr,
struct elf32_phdr *eppnt, int prot, int type,
unsigned long total_size);
static inline unsigned long metag_elf_map(struct file *filep,
unsigned long addr,
struct elf32_phdr *eppnt, int prot,
int type, unsigned long total_size)
{
return __metag_elf_map(filep, addr, eppnt, prot, type, total_size);
}
#define elf_map metag_elf_map
#endif
#endif
/*
* fixmap.h: compile-time virtual memory allocation
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 1998 Ingo Molnar
*
* Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
*/
#ifndef _ASM_FIXMAP_H
#define _ASM_FIXMAP_H
#include <asm/pgtable.h>
#ifdef CONFIG_HIGHMEM
#include <linux/threads.h>
#include <asm/kmap_types.h>
#endif
/*
* Here we define all the compile-time 'special' virtual
* addresses. The point is to have a constant address at
* compile time, but to set the physical address only
* in the boot process. We allocate these special addresses
* from the end of the consistent memory region backwards.
* Also this lets us do fail-safe vmalloc(), we
* can guarantee that these special addresses and
* vmalloc()-ed addresses never overlap.
*
 * These 'compile-time allocated' memory buffers are
 * fixed-size 4k pages (or larger if used with an increment
 * higher than 1). Use fixmap_set(idx, phys) to associate
 * physical memory with fixmap indices.
*
* TLB entries of such buffers will not be flushed across
* task switches.
*/
enum fixed_addresses {
#define FIX_N_COLOURS 8
#ifdef CONFIG_HIGHMEM
/* reserved pte's for temporary kernel mappings */
FIX_KMAP_BEGIN,
FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
#endif
__end_of_fixed_addresses
};
#define FIXADDR_TOP (CONSISTENT_START - PAGE_SIZE)
#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
#define FIXADDR_START ((FIXADDR_TOP - FIXADDR_SIZE) & PMD_MASK)
#define __fix_to_virt(x) (FIXADDR_TOP - ((x) << PAGE_SHIFT))
#define __virt_to_fix(x) ((FIXADDR_TOP - ((x)&PAGE_MASK)) >> PAGE_SHIFT)
extern void __this_fixmap_does_not_exist(void);
/*
* 'index to address' translation. If anyone tries to use the idx
 * directly without translation, we catch the bug with a NULL-dereference
* kernel oops. Illegal ranges of incoming indices are caught too.
*/
static inline unsigned long fix_to_virt(const unsigned int idx)
{
/*
* this branch gets completely eliminated after inlining,
* except when someone tries to use fixaddr indices in an
 * illegal way (such as mixing up address types or using
* out-of-range indices).
*
* If it doesn't get removed, the linker will complain
 * loudly with a reasonably clear error message.
*/
if (idx >= __end_of_fixed_addresses)
__this_fixmap_does_not_exist();
return __fix_to_virt(idx);
}
static inline unsigned long virt_to_fix(const unsigned long vaddr)
{
BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
return __virt_to_fix(vaddr);
}
#define kmap_get_fixmap_pte(vaddr) \
pte_offset_kernel( \
pmd_offset(pud_offset(pgd_offset_k(vaddr), (vaddr)), (vaddr)), \
(vaddr) \
)
/*
* Called from pgtable_init()
*/
extern void fixrange_init(unsigned long start, unsigned long end,
pgd_t *pgd_base);
#endif
#ifndef _ASM_METAG_FTRACE
#define _ASM_METAG_FTRACE
#ifdef CONFIG_FUNCTION_TRACER
#define MCOUNT_INSN_SIZE 8 /* sizeof mcount call */
#ifndef __ASSEMBLY__
extern void mcount_wrapper(void);
#define MCOUNT_ADDR ((long)(mcount_wrapper))
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
return addr;
}
struct dyn_arch_ftrace {
/* No extra data needed on metag */
};
#endif /* __ASSEMBLY__ */
#endif /* CONFIG_FUNCTION_TRACER */
#endif /* _ASM_METAG_FTRACE */
#ifndef __ASM_METAG_GLOBAL_LOCK_H
#define __ASM_METAG_GLOBAL_LOCK_H
#include <asm/metag_mem.h>
/**
* __global_lock1() - Acquire global voluntary lock (LOCK1).
* @flags: Variable to store flags into.
*
* Acquires the Meta global voluntary lock (LOCK1), also taking care to disable
* all triggers so we cannot be interrupted, and to enforce a compiler barrier
* so that the compiler cannot reorder memory accesses across the lock.
*
* No other hardware thread will be able to acquire the voluntary or exclusive
* locks until the voluntary lock is released with @__global_unlock1, but they
* may continue to execute as long as they aren't trying to acquire either of
* the locks.
*/
#define __global_lock1(flags) do { \
unsigned int __trval; \
asm volatile("MOV %0,#0\n\t" \
"SWAP %0,TXMASKI\n\t" \
"LOCK1" \
: "=r" (__trval) \
: \
: "memory"); \
(flags) = __trval; \
} while (0)
/**
* __global_unlock1() - Release global voluntary lock (LOCK1).
* @flags: Variable to restore flags from.
*
* Releases the Meta global voluntary lock (LOCK1) acquired with
* @__global_lock1, also taking care to re-enable triggers, and to enforce a
* compiler barrier so that the compiler cannot reorder memory accesses across
* the unlock.
*
* This immediately allows another hardware thread to acquire the voluntary or
* exclusive locks.
*/
#define __global_unlock1(flags) do { \
unsigned int __trval = (flags); \
asm volatile("LOCK0\n\t" \
"MOV TXMASKI,%0" \
: \
: "r" (__trval) \
: "memory"); \
} while (0)
/**
* __global_lock2() - Acquire global exclusive lock (LOCK2).
* @flags: Variable to store flags into.
*
* Acquires the Meta global voluntary lock and global exclusive lock (LOCK2),
* also taking care to disable all triggers so we cannot be interrupted, to take
* the atomic lock (system event) and to enforce a compiler barrier so that the
* compiler cannot reorder memory accesses across the lock.
*
* No other hardware thread will be able to execute code until the locks are
* released with @__global_unlock2.
*/
#define __global_lock2(flags) do { \
unsigned int __trval; \
unsigned int __aloc_hi = LINSYSEVENT_WR_ATOMIC_LOCK & 0xFFFF0000; \
asm volatile("MOV %0,#0\n\t" \
"SWAP %0,TXMASKI\n\t" \
"LOCK2\n\t" \
"SETD [%1+#0x40],D1RtP" \
: "=r&" (__trval) \
: "u" (__aloc_hi) \
: "memory"); \
(flags) = __trval; \
} while (0)
/**
* __global_unlock2() - Release global exclusive lock (LOCK2).
* @flags: Variable to restore flags from.
*
* Releases the Meta global exclusive lock (LOCK2) and global voluntary lock
* acquired with @__global_lock2, also taking care to release the atomic lock
* (system event), re-enable triggers, and to enforce a compiler barrier so that
* the compiler cannot reorder memory accesses across the unlock.
*
* This immediately allows other hardware threads to continue executing and one
* of them to acquire locks.
*/
#define __global_unlock2(flags) do { \
unsigned int __trval = (flags); \
unsigned int __alock_hi = LINSYSEVENT_WR_ATOMIC_LOCK & 0xFFFF0000; \
asm volatile("SETD [%1+#0x00],D1RtP\n\t" \
"LOCK0\n\t" \
"MOV TXMASKI,%0" \
: \
: "r" (__trval), \
"u" (__alock_hi) \
: "memory"); \
} while (0)
#endif /* __ASM_METAG_GLOBAL_LOCK_H */
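/*
 * Usage sketch (not part of the header above): the voluntary-lock pattern.
 * LOCK1 only serialises against other hardware threads that also take one
 * of the locks, so it suits software-only critical sections, whereas LOCK2
 * (as used by cmpxchg_lock1.h above) stops the other hardware threads
 * entirely and so is also atomic with respect to ordinary, unlocked
 * accesses. example_update_shared() is a made-up illustration.
 */
static inline void example_update_shared(unsigned int *shared,
                                         unsigned int value)
{
        unsigned long flags;

        __global_lock1(flags);
        *shared = value;        /* atomic w.r.t. other voluntary lockers */
        __global_unlock1(flags);
}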
#ifndef __LINUX_GPIO_H
#warning Include linux/gpio.h instead of asm/gpio.h
#include <linux/gpio.h>
#endif
#ifndef _ASM_HIGHMEM_H
#define _ASM_HIGHMEM_H
#include <asm/cacheflush.h>
#include <asm/kmap_types.h>
#include <asm/fixmap.h>
/*
* Right now we initialize only a single pte table. It can be extended
* easily, subsequent pte tables have to be allocated in one physical
* chunk of RAM.
*/
/*
* Ordering is (from lower to higher memory addresses):
*
* high_memory
* Persistent kmap area
* PKMAP_BASE
* fixed_addresses
* FIXADDR_START
* FIXADDR_TOP
* Vmalloc area
* VMALLOC_START
* VMALLOC_END
*/
#define PKMAP_BASE (FIXADDR_START - PMD_SIZE)
#define LAST_PKMAP PTRS_PER_PTE
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
#define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
#define kmap_prot PAGE_KERNEL
static inline void flush_cache_kmaps(void)
{
flush_cache_all();
}
/* declarations for highmem.c */
extern unsigned long highstart_pfn, highend_pfn;
extern pte_t *pkmap_page_table;
extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);
extern void kmap_init(void);
/*
* The following functions are already defined by <linux/highmem.h>
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
#endif
#endif
#ifndef _ASM_METAG_HUGETLB_H
#define _ASM_METAG_HUGETLB_H
#include <asm/page.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len) {
return 0;
}
int prepare_hugepage_range(struct file *file, unsigned long addr,
unsigned long len);
static inline void hugetlb_prefault_arch_hook(struct mm_struct *mm)
{
}
static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
unsigned long addr, unsigned long end,
unsigned long floor,
unsigned long ceiling)
{
free_pgd_range(tlb, addr, end, floor, ceiling);
}
static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte)
{
set_pte_at(mm, addr, ptep, pte);
}
static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
return ptep_get_and_clear(mm, addr, ptep);
}
static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
{
}
static inline int huge_pte_none(pte_t pte)
{
return pte_none(pte);
}
static inline pte_t huge_pte_wrprotect(pte_t pte)
{
return pte_wrprotect(pte);
}
static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
ptep_set_wrprotect(mm, addr, ptep);
}
static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty)
{
return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
}
static inline pte_t huge_ptep_get(pte_t *ptep)
{
return *ptep;
}
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#endif /* _ASM_METAG_HUGETLB_H */
/*
* Copyright (C) 2008 Imagination Technologies
*/
#ifndef __METAG_HWTHREAD_H
#define __METAG_HWTHREAD_H
#include <linux/bug.h>
#include <linux/io.h>
#include <asm/metag_mem.h>
#define BAD_HWTHREAD_ID (0xFFU)
#define BAD_CPU_ID (0xFFU)
extern u8 cpu_2_hwthread_id[];
extern u8 hwthread_id_2_cpu[];
/*
* Each hardware thread's Control Unit registers are memory-mapped
* and can therefore be accessed by any other hardware thread.
*
* This helper function returns the memory address where "thread"'s
* register "regnum" is mapped.
*/
static inline
void __iomem *__CU_addr(unsigned int thread, unsigned int regnum)
{
unsigned int base, thread_offset, thread_regnum;
WARN_ON(thread == BAD_HWTHREAD_ID);
base = T0UCTREG0; /* Control unit base */
thread_offset = TnUCTRX_STRIDE * thread;
thread_regnum = TXUCTREGn_STRIDE * regnum;
return (void __iomem *)(base + thread_offset + thread_regnum);
}
#endif /* __METAG_HWTHREAD_H */
#ifndef _ASM_METAG_IO_H
#define _ASM_METAG_IO_H
#include <linux/types.h>
#define IO_SPACE_LIMIT 0
#define page_to_bus page_to_phys
#define bus_to_page phys_to_page
/*
* Generic I/O
*/
#define __raw_readb __raw_readb
static inline u8 __raw_readb(const volatile void __iomem *addr)
{
u8 ret;
asm volatile("GETB %0,[%1]"
: "=da" (ret)
: "da" (addr)
: "memory");
return ret;
}
#define __raw_readw __raw_readw
static inline u16 __raw_readw(const volatile void __iomem *addr)
{
u16 ret;
asm volatile("GETW %0,[%1]"
: "=da" (ret)
: "da" (addr)
: "memory");
return ret;
}
#define __raw_readl __raw_readl
static inline u32 __raw_readl(const volatile void __iomem *addr)
{
u32 ret;
asm volatile("GETD %0,[%1]"
: "=da" (ret)
: "da" (addr)
: "memory");
return ret;
}
#define __raw_readq __raw_readq
static inline u64 __raw_readq(const volatile void __iomem *addr)
{
u64 ret;
asm volatile("GETL %0,%t0,[%1]"
: "=da" (ret)
: "da" (addr)
: "memory");
return ret;
}
#define __raw_writeb __raw_writeb
static inline void __raw_writeb(u8 b, volatile void __iomem *addr)
{
asm volatile("SETB [%0],%1"
:
: "da" (addr),
"da" (b)
: "memory");
}
#define __raw_writew __raw_writew
static inline void __raw_writew(u16 b, volatile void __iomem *addr)
{
asm volatile("SETW [%0],%1"
:
: "da" (addr),
"da" (b)
: "memory");
}
#define __raw_writel __raw_writel
static inline void __raw_writel(u32 b, volatile void __iomem *addr)
{
asm volatile("SETD [%0],%1"
:
: "da" (addr),
"da" (b)
: "memory");
}
#define __raw_writeq __raw_writeq
static inline void __raw_writeq(u64 b, volatile void __iomem *addr)
{
asm volatile("SETL [%0],%1,%t1"
:
: "da" (addr),
"da" (b)
: "memory");
}
/*
* The generic io.h can define all the other generic accessors
*/
#include <asm-generic/io.h>
/*
 * Despite being a 32-bit architecture, Meta can do 64-bit memory accesses
* (assuming the bus supports it).
*/
#define readq __raw_readq
#define writeq __raw_writeq
/*
* Meta specific I/O for accessing non-MMU areas.
*
* These can be provided with a physical address rather than an __iomem pointer
* and should only be used by core architecture code for accessing fixed core
* registers. Generic drivers should use ioremap and the generic I/O accessors.
*/
#define metag_in8(addr) __raw_readb((volatile void __iomem *)(addr))
#define metag_in16(addr) __raw_readw((volatile void __iomem *)(addr))
#define metag_in32(addr) __raw_readl((volatile void __iomem *)(addr))
#define metag_in64(addr) __raw_readq((volatile void __iomem *)(addr))
#define metag_out8(b, addr) __raw_writeb(b, (volatile void __iomem *)(addr))
#define metag_out16(b, addr) __raw_writew(b, (volatile void __iomem *)(addr))
#define metag_out32(b, addr) __raw_writel(b, (volatile void __iomem *)(addr))
#define metag_out64(b, addr) __raw_writeq(b, (volatile void __iomem *)(addr))
/*
* io remapping functions
*/
extern void __iomem *__ioremap(unsigned long offset,
size_t size, unsigned long flags);
extern void __iounmap(void __iomem *addr);
/**
* ioremap - map bus memory into CPU space
* @offset: bus address of the memory
* @size: size of the resource to map
*
* ioremap performs a platform specific sequence of operations to
* make bus memory CPU accessible via the readb/readw/readl/writeb/
* writew/writel functions and the other mmio helpers. The returned
* address is not guaranteed to be usable directly as a virtual
* address.
*/
#define ioremap(offset, size) \
__ioremap((offset), (size), 0)
#define ioremap_nocache(offset, size) \
__ioremap((offset), (size), 0)
#define ioremap_cached(offset, size) \
__ioremap((offset), (size), _PAGE_CACHEABLE)
#define ioremap_wc(offset, size) \
__ioremap((offset), (size), _PAGE_WR_COMBINE)
#define iounmap(addr) \
__iounmap(addr)
#endif /* _ASM_METAG_IO_H */
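/*
 * Usage sketch (not part of the header above): a generic driver maps its
 * registers with ioremap() and then uses the ordinary MMIO accessors from
 * asm-generic/io.h, while core architecture code may use metag_in32()/
 * metag_out32() on fixed physical addresses. The EXAMPLE_REG_* values are
 * made up for the example.
 */
#define EXAMPLE_REG_BASE        0x02000000UL    /* hypothetical device */
#define EXAMPLE_REG_SIZE        0x1000
#define EXAMPLE_REG_CTRL        0x10

static int example_enable_block(void)
{
        void __iomem *regs = ioremap(EXAMPLE_REG_BASE, EXAMPLE_REG_SIZE);

        if (!regs)
                return -ENOMEM;
        writel(1, regs + EXAMPLE_REG_CTRL);     /* generic accessor */
        iounmap(regs);
        return 0;
}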
#ifndef __ASM_METAG_IRQ_H
#define __ASM_METAG_IRQ_H
#ifdef CONFIG_4KSTACKS
extern void irq_ctx_init(int cpu);
extern void irq_ctx_exit(int cpu);
# define __ARCH_HAS_DO_SOFTIRQ
#else
# define irq_ctx_init(cpu) do { } while (0)
# define irq_ctx_exit(cpu) do { } while (0)
#endif
void tbi_startup_interrupt(int);
void tbi_shutdown_interrupt(int);
struct pt_regs;
int tbisig_map(unsigned int hw);
extern void do_IRQ(int irq, struct pt_regs *regs);
#ifdef CONFIG_METAG_SUSPEND_MEM
int traps_save_context(void);
int traps_restore_context(void);
#endif
#include <asm-generic/irq.h>
#ifdef CONFIG_HOTPLUG_CPU
extern void migrate_irqs(void);
#endif
#endif /* __ASM_METAG_IRQ_H */
/*
* IRQ flags handling
*
* This file gets included from lowlevel asm headers too, to provide
* wrapped versions of the local_irq_*() APIs, based on the
* raw_local_irq_*() functions from the lowlevel headers.
*/
#ifndef _ASM_IRQFLAGS_H
#define _ASM_IRQFLAGS_H
#ifndef __ASSEMBLY__
#include <asm/core_reg.h>
#include <asm/metag_regs.h>
#define INTS_OFF_MASK TXSTATI_BGNDHALT_BIT
#ifdef CONFIG_SMP
extern unsigned int get_trigger_mask(void);
#else
extern unsigned int global_trigger_mask;
static inline unsigned int get_trigger_mask(void)
{
return global_trigger_mask;
}
#endif
static inline unsigned long arch_local_save_flags(void)
{
return __core_reg_get(TXMASKI);
}
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
return (flags & ~INTS_OFF_MASK) == 0;
}
static inline int arch_irqs_disabled(void)
{
unsigned long flags = arch_local_save_flags();
return arch_irqs_disabled_flags(flags);
}
static inline unsigned long __irqs_disabled(void)
{
/*
* We shouldn't enable exceptions if they are not already
* enabled. This is required for chancalls to work correctly.
*/
return arch_local_save_flags() & INTS_OFF_MASK;
}
/*
* For spinlocks, etc:
*/
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags = __irqs_disabled();
asm volatile("SWAP %0,TXMASKI\n" : "=r" (flags) : "0" (flags)
: "memory");
return flags;
}
static inline void arch_local_irq_restore(unsigned long flags)
{
asm volatile("MOV TXMASKI,%0\n" : : "r" (flags) : "memory");
}
static inline void arch_local_irq_disable(void)
{
unsigned long flags = __irqs_disabled();
asm volatile("MOV TXMASKI,%0\n" : : "r" (flags) : "memory");
}
#ifdef CONFIG_SMP
/* Avoid circular include dependencies through <linux/preempt.h> */
void arch_local_irq_enable(void);
#else
static inline void arch_local_irq_enable(void)
{
arch_local_irq_restore(get_trigger_mask());
}
#endif
#endif /* (__ASSEMBLY__) */
#endif /* !(_ASM_IRQFLAGS_H) */
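/*
 * Usage sketch (not part of the header above): the save/restore pattern
 * the raw helpers implement via TXMASKI. Kernel code normally reaches these
 * through local_irq_save()/local_irq_restore() rather than calling the
 * arch_* functions directly; example_irqsafe_update() is made up.
 */
static inline void example_irqsafe_update(unsigned int *word,
                                          unsigned int value)
{
        unsigned long flags;

        flags = arch_local_irq_save();  /* triggers masked from here... */
        *word = value;
        arch_local_irq_restore(flags);  /* ...previous mask put back */
}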
#ifndef _METAG_L2CACHE_H
#define _METAG_L2CACHE_H
#ifdef CONFIG_METAG_L2C
#include <asm/global_lock.h>
#include <asm/io.h>
/*
* Store the last known value of pfenable (we don't want prefetch enabled while
* L2 is off).
*/
extern int l2c_pfenable;
/* defined in arch/metag/drivers/core-sysfs.c */
extern struct sysdev_class cache_sysclass;
static inline void wr_fence(void);
/*
* Functions for reading of L2 cache configuration.
*/
/* Get raw L2 config register (CORE_CONFIG3) */
static inline unsigned int meta_l2c_config(void)
{
const unsigned int *corecfg3 = (const unsigned int *)METAC_CORE_CONFIG3;
return *corecfg3;
}
/* Get whether the L2 is present */
static inline int meta_l2c_is_present(void)
{
return meta_l2c_config() & METAC_CORECFG3_L2C_HAVE_L2C_BIT;
}
/* Get whether the L2 is configured for write-back instead of write-through */
static inline int meta_l2c_is_writeback(void)
{
return meta_l2c_config() & METAC_CORECFG3_L2C_MODE_BIT;
}
/* Get whether the L2 is unified instead of separated code/data */
static inline int meta_l2c_is_unified(void)
{
return meta_l2c_config() & METAC_CORECFG3_L2C_UNIFIED_BIT;
}
/* Get the L2 cache size in bytes */
static inline unsigned int meta_l2c_size(void)
{
unsigned int size_s;
if (!meta_l2c_is_present())
return 0;
size_s = (meta_l2c_config() & METAC_CORECFG3_L2C_SIZE_BITS)
>> METAC_CORECFG3_L2C_SIZE_S;
/* L2CSIZE is in KiB */
return 1024 << size_s;
}
/* Get the number of ways in the L2 cache */
static inline unsigned int meta_l2c_ways(void)
{
unsigned int ways_s;
if (!meta_l2c_is_present())
return 0;
ways_s = (meta_l2c_config() & METAC_CORECFG3_L2C_NUM_WAYS_BITS)
>> METAC_CORECFG3_L2C_NUM_WAYS_S;
return 0x1 << ways_s;
}
/* Get the line size of the L2 cache */
static inline unsigned int meta_l2c_linesize(void)
{
unsigned int line_size;
if (!meta_l2c_is_present())
return 0;
line_size = (meta_l2c_config() & METAC_CORECFG3_L2C_LINE_SIZE_BITS)
>> METAC_CORECFG3_L2C_LINE_SIZE_S;
switch (line_size) {
case METAC_CORECFG3_L2C_LINE_SIZE_64B:
return 64;
default:
return 0;
}
}
/* Get the revision ID of the L2 cache */
static inline unsigned int meta_l2c_revision(void)
{
return (meta_l2c_config() & METAC_CORECFG3_L2C_REV_ID_BITS)
>> METAC_CORECFG3_L2C_REV_ID_S;
}
/*
* Start an initialisation of the L2 cachelines and wait for completion.
* This should only be done in a LOCK1 or LOCK2 critical section while the L2
* is disabled.
*/
static inline void _meta_l2c_init(void)
{
metag_out32(SYSC_L2C_INIT_INIT, SYSC_L2C_INIT);
while (metag_in32(SYSC_L2C_INIT) == SYSC_L2C_INIT_IN_PROGRESS)
/* do nothing */;
}
/*
* Start a writeback of dirty L2 cachelines and wait for completion.
* This should only be done in a LOCK1 or LOCK2 critical section.
*/
static inline void _meta_l2c_purge(void)
{
metag_out32(SYSC_L2C_PURGE_PURGE, SYSC_L2C_PURGE);
while (metag_in32(SYSC_L2C_PURGE) == SYSC_L2C_PURGE_IN_PROGRESS)
/* do nothing */;
}
/* Set whether the L2 cache is enabled. */
static inline void _meta_l2c_enable(int enabled)
{
unsigned int enable;
enable = metag_in32(SYSC_L2C_ENABLE);
if (enabled)
enable |= SYSC_L2C_ENABLE_ENABLE_BIT;
else
enable &= ~SYSC_L2C_ENABLE_ENABLE_BIT;
metag_out32(enable, SYSC_L2C_ENABLE);
}
/* Set whether the L2 cache prefetch is enabled. */
static inline void _meta_l2c_pf_enable(int pfenabled)
{
unsigned int enable;
enable = metag_in32(SYSC_L2C_ENABLE);
if (pfenabled)
enable |= SYSC_L2C_ENABLE_PFENABLE_BIT;
else
enable &= ~SYSC_L2C_ENABLE_PFENABLE_BIT;
metag_out32(enable, SYSC_L2C_ENABLE);
}
/* Return whether the L2 cache is enabled */
static inline int _meta_l2c_is_enabled(void)
{
return metag_in32(SYSC_L2C_ENABLE) & SYSC_L2C_ENABLE_ENABLE_BIT;
}
/* Return whether the L2 cache prefetch is enabled */
static inline int _meta_l2c_pf_is_enabled(void)
{
return metag_in32(SYSC_L2C_ENABLE) & SYSC_L2C_ENABLE_PFENABLE_BIT;
}
/* Return whether the L2 cache is enabled */
static inline int meta_l2c_is_enabled(void)
{
int en;
/*
* There is no need to lock at the moment, as the enable bit is never
* intermediately changed, so we will never see an intermediate result.
*/
en = _meta_l2c_is_enabled();
return en;
}
/*
* Ensure the L2 cache is disabled.
* Return whether the L2 was previously disabled.
*/
int meta_l2c_disable(void);
/*
* Ensure the L2 cache is enabled.
* Return whether the L2 was previously enabled.
*/
int meta_l2c_enable(void);
/* Return whether the L2 cache prefetch is enabled */
static inline int meta_l2c_pf_is_enabled(void)
{
return l2c_pfenable;
}
/*
* Set whether the L2 cache prefetch is enabled.
* Return whether the L2 prefetch was previously enabled.
*/
int meta_l2c_pf_enable(int pfenable);
/*
* Flush the L2 cache.
* Return 1 if the L2 is disabled.
*/
int meta_l2c_flush(void);
/*
* Write back all dirty cache lines in the L2 cache.
* Return 1 if the L2 is disabled or there isn't any writeback.
*/
static inline int meta_l2c_writeback(void)
{
unsigned long flags;
int en;
/* no need to purge if it's not a writeback cache */
if (!meta_l2c_is_writeback())
return 1;
/*
* Purge only works if the L2 is enabled, and involves reading back to
* detect completion, so keep this operation atomic with other threads.
*/
__global_lock1(flags);
en = meta_l2c_is_enabled();
if (likely(en)) {
wr_fence();
_meta_l2c_purge();
}
__global_unlock1(flags);
return !en;
}
#else /* CONFIG_METAG_L2C */
#define meta_l2c_config() 0
#define meta_l2c_is_present() 0
#define meta_l2c_is_writeback() 0
#define meta_l2c_is_unified() 0
#define meta_l2c_size() 0
#define meta_l2c_ways() 0
#define meta_l2c_linesize() 0
#define meta_l2c_revision() 0
#define meta_l2c_is_enabled() 0
#define _meta_l2c_pf_is_enabled() 0
#define meta_l2c_pf_is_enabled() 0
#define meta_l2c_disable() 1
#define meta_l2c_enable() 0
#define meta_l2c_pf_enable(X) 0
static inline int meta_l2c_flush(void)
{
return 1;
}
static inline int meta_l2c_writeback(void)
{
return 1;
}
#endif /* CONFIG_METAG_L2C */
#endif /* _METAG_L2CACHE_H */
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H
#define __ALIGN .p2align 2
#define __ALIGN_STR ".p2align 2"
#endif
/*
* arch/metag/include/asm/mach/arch.h
*
* Copyright (C) 2012 Imagination Technologies Ltd.
*
* based on the ARM version:
* Copyright (C) 2000 Russell King
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _METAG_MACH_ARCH_H_
#define _METAG_MACH_ARCH_H_
#include <linux/stddef.h>
#include <asm/clock.h>
/**
* struct machine_desc - Describes a board controlled by a Meta.
* @name: Board/SoC name.
* @dt_compat: Array of device tree 'compatible' strings.
* @clocks: Clock callbacks.
*
* @nr_irqs: Maximum number of IRQs.
* If 0, defaults to NR_IRQS in asm-generic/irq.h.
*
* @init_early: Early init callback.
* @init_irq: IRQ init callback for setting up IRQ controllers.
* @init_machine: Arch init callback for setting up devices.
* @init_late: Late init callback.
*
* This structure is provided by each board which can be controlled by a Meta.
* It is chosen by matching the compatible strings in the device tree provided
* by the bootloader with the strings in @dt_compat, and sets up any aspects of
* the machine that aren't configured with device tree (yet).
*/
struct machine_desc {
const char *name;
const char **dt_compat;
struct meta_clock_desc *clocks;
unsigned int nr_irqs;
void (*init_early)(void);
void (*init_irq)(void);
void (*init_machine)(void);
void (*init_late)(void);
};
/*
* Current machine - only accessible during boot.
*/
extern struct machine_desc *machine_desc;
/*
* Machine type table - also only accessible during boot
*/
extern struct machine_desc __arch_info_begin[], __arch_info_end[];
#define for_each_machine_desc(p) \
for (p = __arch_info_begin; p < __arch_info_end; p++)
static inline struct machine_desc *default_machine_desc(void)
{
/* the default machine is the last one linked in */
if (__arch_info_end - 1 < __arch_info_begin)
return NULL;
return __arch_info_end - 1;
}
/*
* Set of macros to define architecture features. This is built into
* a table by the linker.
*/
#define MACHINE_START(_type, _name) \
static const struct machine_desc __mach_desc_##_type \
__used \
__attribute__((__section__(".arch.info.init"))) = { \
.name = _name,
#define MACHINE_END \
};
#endif /* _METAG_MACH_ARCH_H_ */
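/*
 * Illustrative sketch (not part of the header above): how a board file uses
 * MACHINE_START/MACHINE_END to add an entry to the .arch.info.init table.
 * The board name, compatible string, and callback are made up; the matching
 * entry is chosen at boot by comparing dt_compat against the device tree.
 */
static const char *example_board_compat[] = {
        "example,example-board",
        NULL,
};

static void __init example_board_init(void)
{
        /* register any devices not yet described by the device tree */
}

MACHINE_START(EXAMPLE_BOARD, "Example Meta board")
        .dt_compat      = example_board_compat,
        .init_machine   = example_board_init,
MACHINE_END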
/*
* asm/metag_isa.h
*
* Copyright (C) 2000-2007, 2012 Imagination Technologies.
*
* This program is free software; you can redistribute it and/or modify it under
* the terms of the GNU General Public License version 2 as published by the
* Free Software Foundation.
*
* Various defines for Meta instruction set.
*/
#ifndef _ASM_METAG_ISA_H_
#define _ASM_METAG_ISA_H_
/* L1 cache layout */
/* Data cache line size as bytes and shift */
#define DCACHE_LINE_BYTES 64
#define DCACHE_LINE_S 6
/* Number of ways in the data cache */
#define DCACHE_WAYS 4
/* Instruction cache line size as bytes and shift */
#define ICACHE_LINE_BYTES 64
#define ICACHE_LINE_S 6
/* Number of ways in the instruction cache */
#define ICACHE_WAYS 4
/*
* CACHEWD/CACHEWL instructions use the bottom 8 bits of the data presented to
* control the operation actually achieved.
*/
/* Use of these two bits should be discouraged since the bits don't have
 * consistent meanings.
*/
#define CACHEW_ICACHE_BIT 0x01
#define CACHEW_TLBFLUSH_BIT 0x02
#define CACHEW_FLUSH_L1D_L2 0x0
#define CACHEW_INVALIDATE_L1I 0x1
#define CACHEW_INVALIDATE_L1DTLB 0x2
#define CACHEW_INVALIDATE_L1ITLB 0x3
#define CACHEW_WRITEBACK_L1D_L2 0x4
#define CACHEW_INVALIDATE_L1D 0x8
#define CACHEW_INVALIDATE_L1D_L2 0xC
/*
* CACHERD/CACHERL instructions use bits 3:5 of the address presented to
* control the operation achieved and hence the specific result.
*/
#define CACHER_ADDR_BITS 0xFFFFFFC0
#define CACHER_OPER_BITS 0x00000030
#define CACHER_OPER_S 4
#define CACHER_OPER_LINPHY 0
#define CACHER_ICACHE_BIT 0x00000008
#define CACHER_ICACHE_S 3
/*
* CACHERD/CACHERL LINPHY Oper result is one/two 32-bit words
*
* If CRLINPHY0_VAL_BIT (Bit 0) set then,
* Lower 32-bits corresponds to MMCU_ENTRY_* above.
* Upper 32-bits corresponds to CRLINPHY1_* values below (if requested).
* else
* Lower 32-bits corresponds to CRLINPHY0_* values below.
* Upper 32-bits undefined.
*/
#define CRLINPHY0_VAL_BIT 0x00000001
#define CRLINPHY0_FIRST_BIT 0x00000004 /* Set if VAL=0 due to first level */
#define CRLINPHY1_READ_BIT 0x00000001 /* Set if reads permitted */
#define CRLINPHY1_SINGLE_BIT 0x00000004 /* Set if TLB does not cache entry */
#define CRLINPHY1_PAGEMSK_BITS 0x0000FFF0 /* Set to ((2^n-1)>>12) value */
#define CRLINPHY1_PAGEMSK_S 4
#endif /* _ASM_METAG_ISA_H_ */
#ifndef __METAG_MMAN_H__
#define __METAG_MMAN_H__
#include <uapi/asm/mman.h>
#ifndef __ASSEMBLY__
#define arch_mmap_check metag_mmap_check
int metag_mmap_check(unsigned long addr, unsigned long len,
unsigned long flags);
#endif
#endif /* __METAG_MMAN_H__ */
#ifndef __MMU_H
#define __MMU_H
#ifdef CONFIG_METAG_USER_TCM
#include <linux/list.h>
#endif
#ifdef CONFIG_HUGETLB_PAGE
#include <asm/page.h>
#endif
typedef struct {
/* Software pgd base pointer used for Meta 1.x MMU. */
unsigned long pgd_base;
#ifdef CONFIG_METAG_USER_TCM
struct list_head tcm;
#endif
#ifdef CONFIG_HUGETLB_PAGE
#if HPAGE_SHIFT < HUGEPT_SHIFT
/* last partially filled huge page table address */
unsigned long part_huge;
#endif
#endif
} mm_context_t;
/* Given a virtual address, return the pte for the top level 4meg entry
* that maps that address.
* Returns 0 (an empty pte) if that range is not mapped.
*/
unsigned long mmu_read_first_level_page(unsigned long vaddr);
/* Given a linear (virtual) address, return the second level 4k pte
* that maps that address. Returns 0 if the address is not mapped.
*/
unsigned long mmu_read_second_level_page(unsigned long vaddr);
/* Get the virtual base address of the MMU */
unsigned long mmu_get_base(void);
/* Initialize the MMU. */
void mmu_init(unsigned long mem_end);
#ifdef CONFIG_METAG_META21_MMU
/*
* For cpu "cpu" calculate and return the address of the
* MMCU_TnLOCAL_TABLE_PHYS0 if running in local-space or
* MMCU_TnGLOBAL_TABLE_PHYS0 if running in global-space.
*/
static inline unsigned long mmu_phys0_addr(unsigned int cpu)
{
unsigned long phys0;
phys0 = (MMCU_T0LOCAL_TABLE_PHYS0 +
(MMCU_TnX_TABLE_PHYSX_STRIDE * cpu)) +
(MMCU_TXG_TABLE_PHYSX_OFFSET * is_global_space(PAGE_OFFSET));
return phys0;
}
/*
* For cpu "cpu" calculate and return the address of the
* MMCU_TnLOCAL_TABLE_PHYS1 if running in local-space or
* MMCU_TnGLOBAL_TABLE_PHYS1 if running in global-space.
*/
static inline unsigned long mmu_phys1_addr(unsigned int cpu)
{
unsigned long phys1;
phys1 = (MMCU_T0LOCAL_TABLE_PHYS1 +
(MMCU_TnX_TABLE_PHYSX_STRIDE * cpu)) +
(MMCU_TXG_TABLE_PHYSX_OFFSET * is_global_space(PAGE_OFFSET));
return phys1;
}
#endif /* CONFIG_METAG_META21_MMU */
#endif
#ifndef __METAG_MMU_CONTEXT_H
#define __METAG_MMU_CONTEXT_H
#include <asm-generic/mm_hooks.h>
#include <asm/page.h>
#include <asm/mmu.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
#include <linux/io.h>
static inline void enter_lazy_tlb(struct mm_struct *mm,
struct task_struct *tsk)
{
}
static inline int init_new_context(struct task_struct *tsk,
struct mm_struct *mm)
{
#ifndef CONFIG_METAG_META21_MMU
/* We use context to store a pointer to the page holding the
* pgd of a process while it is running. While a process is not
* running the pgd and context fields should be equal.
*/
mm->context.pgd_base = (unsigned long) mm->pgd;
#endif
#ifdef CONFIG_METAG_USER_TCM
INIT_LIST_HEAD(&mm->context.tcm);
#endif
return 0;
}
#ifdef CONFIG_METAG_USER_TCM
#include <linux/slab.h>
#include <asm/tcm.h>
static inline void destroy_context(struct mm_struct *mm)
{
struct tcm_allocation *pos, *n;
list_for_each_entry_safe(pos, n, &mm->context.tcm, list) {
tcm_free(pos->tag, pos->addr, pos->size);
list_del(&pos->list);
kfree(pos);
}
}
#else
#define destroy_context(mm) do { } while (0)
#endif
#ifdef CONFIG_METAG_META21_MMU
static inline void load_pgd(pgd_t *pgd, int thread)
{
unsigned long phys0 = mmu_phys0_addr(thread);
unsigned long phys1 = mmu_phys1_addr(thread);
/*
* 0x900 2Gb address space
 * The permission bits apply to the MMU table region, which gives a 2MB
* window into physical memory. We especially don't want userland to be
* able to access this.
*/
metag_out32(0x900 | _PAGE_CACHEABLE | _PAGE_PRIV | _PAGE_WRITE |
_PAGE_PRESENT, phys0);
/* Set new MMU base address */
metag_out32(__pa(pgd) & MMCU_TBLPHYS1_ADDR_BITS, phys1);
}
#endif
static inline void switch_mmu(struct mm_struct *prev, struct mm_struct *next)
{
#ifdef CONFIG_METAG_META21_MMU
load_pgd(next->pgd, hard_processor_id());
#else
unsigned int i;
/* prev->context == prev->pgd in the case where we are initially
switching from the init task to the first process. */
if (prev->context.pgd_base != (unsigned long) prev->pgd) {
for (i = FIRST_USER_PGD_NR; i < USER_PTRS_PER_PGD; i++)
((pgd_t *) prev->context.pgd_base)[i] = prev->pgd[i];
} else
prev->pgd = (pgd_t *)mmu_get_base();
next->pgd = prev->pgd;
prev->pgd = (pgd_t *) prev->context.pgd_base;
for (i = FIRST_USER_PGD_NR; i < USER_PTRS_PER_PGD; i++)
next->pgd[i] = ((pgd_t *) next->context.pgd_base)[i];
flush_cache_all();
#endif
flush_tlb_all();
}
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
if (prev != next)
switch_mmu(prev, next);
}
static inline void activate_mm(struct mm_struct *prev_mm,
struct mm_struct *next_mm)
{
switch_mmu(prev_mm, next_mm);
}
#define deactivate_mm(tsk, mm) do { } while (0)
#endif
#ifndef __ASM_METAG_MMZONE_H
#define __ASM_METAG_MMZONE_H
#ifdef CONFIG_NEED_MULTIPLE_NODES
#include <linux/numa.h>
extern struct pglist_data *node_data[];
#define NODE_DATA(nid) (node_data[nid])
static inline int pfn_to_nid(unsigned long pfn)
{
int nid;
for (nid = 0; nid < MAX_NUMNODES; nid++)
if (pfn >= node_start_pfn(nid) && pfn <= node_end_pfn(nid))
break;
return nid;
}
static inline struct pglist_data *pfn_to_pgdat(unsigned long pfn)
{
return NODE_DATA(pfn_to_nid(pfn));
}
/* arch/metag/mm/numa.c */
void __init setup_bootmem_node(int nid, unsigned long start, unsigned long end);
#else
static inline void
setup_bootmem_node(int nid, unsigned long start, unsigned long end)
{
}
#endif /* CONFIG_NEED_MULTIPLE_NODES */
#ifdef CONFIG_NUMA
/* SoC specific mem init */
void __init soc_mem_setup(void);
#else
static inline void __init soc_mem_setup(void) { }
#endif
#endif /* __ASM_METAG_MMZONE_H */
#ifndef _ASM_METAG_MODULE_H
#define _ASM_METAG_MODULE_H
#include <asm-generic/module.h>
struct metag_plt_entry {
/* Indirect jump instruction sequence. */
unsigned long tramp[2];
};
struct mod_arch_specific {
/* Indices of PLT sections within module. */
unsigned int core_plt_section, init_plt_section;
};
#if defined CONFIG_METAG_META12
#define MODULE_PROC_FAMILY "META 1.2 "
#elif defined CONFIG_METAG_META21
#define MODULE_PROC_FAMILY "META 2.1 "
#else
#define MODULE_PROC_FAMILY ""
#endif
#ifdef CONFIG_4KSTACKS
#define MODULE_STACKSIZE "4KSTACKS "
#else
#define MODULE_STACKSIZE ""
#endif
#define MODULE_ARCH_VERMAGIC MODULE_PROC_FAMILY MODULE_STACKSIZE
#ifdef MODULE
asm(".section .plt,\"ax\",@progbits; .balign 8; .previous");
asm(".section .init.plt,\"ax\",@progbits; .balign 8; .previous");
#endif
#endif /* _ASM_METAG_MODULE_H */
#ifndef _METAG_PAGE_H
#define _METAG_PAGE_H
#include <linux/const.h>
#include <asm/metag_mem.h>
/* PAGE_SHIFT determines the page size */
#if defined(CONFIG_PAGE_SIZE_4K)
#define PAGE_SHIFT 12
#elif defined(CONFIG_PAGE_SIZE_8K)
#define PAGE_SHIFT 13
#elif defined(CONFIG_PAGE_SIZE_16K)
#define PAGE_SHIFT 14
#endif
#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1))
#if defined(CONFIG_HUGETLB_PAGE_SIZE_8K)
# define HPAGE_SHIFT 13
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K)
# define HPAGE_SHIFT 14
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K)
# define HPAGE_SHIFT 15
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
# define HPAGE_SHIFT 16
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K)
# define HPAGE_SHIFT 17
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
# define HPAGE_SHIFT 18
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K)
# define HPAGE_SHIFT 19
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M)
# define HPAGE_SHIFT 20
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M)
# define HPAGE_SHIFT 21
#elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M)
# define HPAGE_SHIFT 22
#endif
#ifdef CONFIG_HUGETLB_PAGE
# define HPAGE_SIZE (1UL << HPAGE_SHIFT)
# define HPAGE_MASK (~(HPAGE_SIZE-1))
# define HUGETLB_PAGE_ORDER (HPAGE_SHIFT-PAGE_SHIFT)
/*
* We define our own hugetlb_get_unmapped_area so we don't corrupt 2nd level
* page tables with normal pages in them.
*/
# define HUGEPT_SHIFT (22)
# define HUGEPT_ALIGN (1 << HUGEPT_SHIFT)
# define HUGEPT_MASK (HUGEPT_ALIGN - 1)
# define ALIGN_HUGEPT(x) ALIGN(x, HUGEPT_ALIGN)
# define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
#endif
#ifndef __ASSEMBLY__
/* On the Meta, we would like to know if the address (heap) we have is
* in local or global space.
*/
#define is_global_space(addr) ((addr) > 0x7fffffff)
#define is_local_space(addr) (!is_global_space(addr))
extern void clear_page(void *to);
extern void copy_page(void *to, void *from);
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
/*
* These are used to make use of C type-checking..
*/
typedef struct { unsigned long pte; } pte_t;
typedef struct { unsigned long pgd; } pgd_t;
typedef struct { unsigned long pgprot; } pgprot_t;
typedef struct page *pgtable_t;
#define pte_val(x) ((x).pte)
#define pgd_val(x) ((x).pgd)
#define pgprot_val(x) ((x).pgprot)
#define __pte(x) ((pte_t) { (x) })
#define __pgd(x) ((pgd_t) { (x) })
#define __pgprot(x) ((pgprot_t) { (x) })
/* The kernel must now ALWAYS live at either 0xC0000000 or 0x40000000 - that
* being either global or local space.
*/
#define PAGE_OFFSET (CONFIG_PAGE_OFFSET)
#if PAGE_OFFSET >= LINGLOBAL_BASE
#define META_MEMORY_BASE LINGLOBAL_BASE
#define META_MEMORY_LIMIT LINGLOBAL_LIMIT
#else
#define META_MEMORY_BASE LINLOCAL_BASE
#define META_MEMORY_LIMIT LINLOCAL_LIMIT
#endif
/* Offset between physical and virtual mapping of kernel memory. */
extern unsigned int meta_memoffset;
#define __pa(x) ((unsigned long)(((unsigned long)(x)) - meta_memoffset))
#define __va(x) ((void *)((unsigned long)(((unsigned long)(x)) + meta_memoffset)))
extern unsigned long pfn_base;
#define ARCH_PFN_OFFSET (pfn_base)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define page_to_virt(page) __va(page_to_pfn(page) << PAGE_SHIFT)
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
#ifdef CONFIG_FLATMEM
extern unsigned long max_pfn;
extern unsigned long min_low_pfn;
#define pfn_valid(pfn) ((pfn) >= min_low_pfn && (pfn) < max_pfn)
#endif
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \
VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>
#endif /* __ASSEMBLY__ */
#endif /* _METAG_PAGE_H */
#ifndef __ASM_METAG_PERF_EVENT_H
#define __ASM_METAG_PERF_EVENT_H
#endif /* __ASM_METAG_PERF_EVENT_H */
#ifndef _METAG_PGALLOC_H
#define _METAG_PGALLOC_H
#include <linux/threads.h>
#include <linux/mm.h>
#define pmd_populate_kernel(mm, pmd, pte) \
set_pmd(pmd, __pmd(_PAGE_TABLE | __pa(pte)))
#define pmd_populate(mm, pmd, pte) \
set_pmd(pmd, __pmd(_PAGE_TABLE | page_to_phys(pte)))
#define pmd_pgtable(pmd) pmd_page(pmd)
/*
* Allocate and free page tables.
*/
#ifdef CONFIG_METAG_META21_MMU
static inline void pgd_ctor(pgd_t *pgd)
{
memcpy(pgd + USER_PTRS_PER_PGD,
swapper_pg_dir + USER_PTRS_PER_PGD,
(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
}
#else
#define pgd_ctor(x) do { } while (0)
#endif
static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
pgd_t *pgd = (pgd_t *)get_zeroed_page(GFP_KERNEL);
if (pgd)
pgd_ctor(pgd);
return pgd;
}
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_page((unsigned long)pgd);
}
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT |
__GFP_ZERO);
return pte;
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long address)
{
struct page *pte;
pte = alloc_pages(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO, 0);
if (pte)
pgtable_page_ctor(pte);
return pte;
}
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
free_page((unsigned long)pte);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
{
pgtable_page_dtor(pte);
__free_page(pte);
}
#define __pte_free_tlb(tlb, pte, addr) \
do { \
pgtable_page_dtor(pte); \
tlb_remove_page((tlb), (pte)); \
} while (0)
#define check_pgt_cache() do { } while (0)
#endif
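
Editorial note on pgalloc.h above: pgd_alloc() hands back a zeroed page and, on a Meta 2 (META21) MMU, pgd_ctor() copies the kernel portion of swapper_pg_dir into it so every new address space shares the same kernel mappings; pte_alloc_one() additionally runs pgtable_page_ctor() on the freshly allocated page-table page. A rough userspace model of the constructor copy follows; the table size and user/kernel split are invented for illustration and are not the real PTRS_PER_PGD / USER_PTRS_PER_PGD values.

/* illustrate_pgd_ctor.c - editorial sketch only; sizes are hypothetical */
#include <stdio.h>
#include <string.h>

#define PTRS_PER_PGD	512	/* hypothetical number of pgd entries */
#define USER_PTRS	384	/* hypothetical user/kernel split */

static unsigned long swapper[PTRS_PER_PGD];	/* stands in for swapper_pg_dir */

static void pgd_ctor_model(unsigned long *pgd)
{
	/* copy only the kernel-space entries; user entries stay zero */
	memcpy(pgd + USER_PTRS, swapper + USER_PTRS,
	       (PTRS_PER_PGD - USER_PTRS) * sizeof(*pgd));
}

int main(void)
{
	unsigned long new_pgd[PTRS_PER_PGD] = { 0 };

	swapper[PTRS_PER_PGD - 1] = 0xdeadbeef;	/* pretend kernel mapping */
	pgd_ctor_model(new_pgd);
	printf("kernel entry copied: 0x%lx\n", new_pgd[PTRS_PER_PGD - 1]);
	return 0;
}
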
#ifndef _ASM_METAG_SETUP_H
#define _ASM_METAG_SETUP_H
#include <uapi/asm/setup.h>
void per_cpu_trap_init(unsigned long);
extern void __init dump_machine_table(void);
#endif /* _ASM_METAG_SETUP_H */
#include <linux/byteorder/little_endian.h>
#ifndef _UAPI_METAG_RESOURCE_H
#define _UAPI_METAG_RESOURCE_H
#define _STK_LIM_MAX (1 << 28)
#include <asm-generic/resource.h>
#endif /* _UAPI_METAG_RESOURCE_H */
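
Editorial note: _STK_LIM_MAX above is the per-arch override picked up by asm-generic/resource.h as the default hard limit for RLIMIT_STACK, and (1 << 28) works out to 2^28 = 268435456 bytes, i.e. 256 MiB.
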