Commit a579fcfa authored by Arnd Bergmann

c6x: remove architecture

The c6x architecture was added to the kernel in 2011 at a time when
running Linux on DSPs was widely seen as the logical evolution.
It appears the trend has gone back to running Linux on Arm based SoCs
with DSP, using a better supported software ecosystem, and having better
real-time behavior for the DSP code. An example of this is TI's own
Keystone2 platform.

The upstream kernel port appears to no longer have any users. Mark
Salter remained available to review patches, but mentioned that
he no longer has access to working hardware himself. Without any
users, it's best to just remove the code completely to reduce the
work for cross-architecture code changes.

Many thanks to Mark for maintaining the code for the past ten years.

Link: https://lore.kernel.org/lkml/41dc7795afda9f776d8cd0d3075f776cf586e97c.camel@redhat.com/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
parent bd97ad35
C6X PLL Clock Controllers
-------------------------
This is a first-cut support for the SoC clock controllers. This is still
under development and will probably change as the common device tree
clock support is added to the kernel.
Required properties:
- compatible: "ti,c64x+pll"
May also have SoC-specific value to support SoC-specific initialization
in the driver. One of:
"ti,c6455-pll"
"ti,c6457-pll"
"ti,c6472-pll"
"ti,c6474-pll"
- reg: base address and size of register area
- clock-frequency: input clock frequency in hz
Optional properties:
- ti,c64x+pll-bypass-delay: CPU cycles to delay when entering bypass mode
- ti,c64x+pll-reset-delay: CPU cycles to delay after PLL reset
- ti,c64x+pll-lock-delay: CPU cycles to delay after PLL frequency change
Example:
clock-controller@29a0000 {
compatible = "ti,c6472-pll", "ti,c64x+pll";
reg = <0x029a0000 0x200>;
clock-frequency = <25000000>;
ti,c64x+pll-bypass-delay = <200>;
ti,c64x+pll-reset-delay = <12000>;
ti,c64x+pll-lock-delay = <80000>;
};
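For illustration only, a clock driver consuming this binding might read these
properties roughly as follows (a hedged sketch using the standard Linux
of_property_read_u32() helper; the function and parameter names are made up
for this example):

#include <linux/of.h>

static void example_parse_pll(struct device_node *np, u32 *input_hz,
			      u32 *bypass_delay)
{
	/* required: input clock frequency in Hz */
	of_property_read_u32(np, "clock-frequency", input_hz);

	/* optional: fall back to 0 when the property is absent */
	if (of_property_read_u32(np, "ti,c64x+pll-bypass-delay", bypass_delay))
		*bypass_delay = 0;
}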
Device State Configuration Registers
------------------------------------
TI C6X SoCs contain a region of miscellaneous registers which provide various
functions for SoC control or status. Details vary considerably from SoC
to SoC, with no two being alike.
In general, the Device State Configuration Registers (DSCR) will provide one or
more configuration registers often protected by a lock register where one or
more key values must be written to a lock register in order to unlock the
configuration register for writes. These configuration registers may be used to
enable (and disable in some cases) SoC pin drivers, select peripheral clock
sources (internal or pin), etc. In some cases, a configuration register is
write once or the individual bits are write once. In addition to device config,
the DSCR block may provide registers which are used to reset peripherals,
provide device ID information, provide ethernet MAC addresses, as well as other
miscellaneous functions.
For device state control (enable/disable), each device control is assigned an
id which is used by individual device drivers to control the state as needed.
Required properties:
- compatible: must be "ti,c64x+dscr"
- reg: register area base and size
Optional properties:
NOTE: These are optional in that not all SoCs will have all properties. For
SoCs which do support a given property, leaving the property out of the
device tree will result in reduced functionality or possibly driver
failure.
- ti,dscr-devstat
offset of the devstat register
- ti,dscr-silicon-rev
offset, start bit, and bitsize of silicon revision field
- ti,dscr-rmii-resets
offset and bitmask of RMII reset field. May have multiple tuples if more
than one ethernet port is available.
- ti,dscr-locked-regs
possibly multiple tuples describing registers which are write protected by
a lock register. Each tuple consists of the register offset, lock register
offset, and the key value used to unlock the register.
- ti,dscr-kick-regs
offset and key values of two "kick" registers used to write protect other
registers in DSCR. On SoCs using kick registers, the first key must be
written to the first kick register and the second key must be written to
the second register before other registers in the area are write-enabled.
- ti,dscr-mac-fuse-regs
MAC addresses are contained in two registers. Each element of a MAC address
is contained in a single byte. This property has two tuples. Each tuple has
a register offset and four cells representing bytes in the register from
most significant to least. The value of these four cells is the MAC byte
index (1-6) of the byte within the register. A value of 0 means the byte
is unused in the MAC address.
- ti,dscr-devstate-ctl-regs
This property describes the bitfields used to control the state of devices.
Each tuple describes a range of identical bitfields used to control one or
more devices (one bitfield per device). The layout of each tuple is:
start_id num_ids reg enable disable start_bit nbits
Where:
start_id is device id for the first device control in the range
num_ids is the number of device controls in the range
reg is the offset of the register holding the control bits
enable is the value to enable a device
disable is the value to disable a device (0xffffffff if cannot disable)
start_bit is the bit number of the first bit in the range
nbits is the number of bits per device control
- ti,dscr-devstate-stat-regs
This property describes the bitfields used to provide device state status
for device states controlled by the DSCR. Each tuple describes a range of
identical bitfields used to provide status for one or more devices (one
bitfield per device). The layout of each tuple is:
start_id num_ids reg enable disable start_bit nbits
Where:
start_id is device id for the first device status in the range
num_ids is the number of devices covered by the range
reg is the offset of the register holding the status bits
enable is the value indicating device is enabled
disable is the value indicating device is disabled
start_bit is the bit number of the first bit in the range
nbits is the number of bits per device status
- ti,dscr-privperm
Offset and default value for register used to set access privilege for
some SoC devices.
Example:
device-state-config-regs@2a80000 {
compatible = "ti,c64x+dscr";
reg = <0x02a80000 0x41000>;
ti,dscr-devstat = <0>;
ti,dscr-silicon-rev = <8 28 0xf>;
ti,dscr-rmii-resets = <0x40020 0x00040000>;
ti,dscr-locked-regs = <0x40008 0x40004 0x0f0a0b00>;
ti,dscr-devstate-ctl-regs =
<0 12 0x40008 1 0 0 2
12 1 0x40008 3 0 30 2
13 2 0x4002c 1 0xffffffff 0 1>;
ti,dscr-devstate-stat-regs =
<0 10 0x40014 1 0 0 3
10 2 0x40018 1 0 0 3>;
ti,dscr-mac-fuse-regs = <0x700 1 2 3 4
0x704 5 6 0 0>;
ti,dscr-privperm = <0x41c 0xaaaaaaaa>;
ti,dscr-kick-regs = <0x38 0x83E70B13
0x3c 0x95A4F1E0>;
};
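For illustration only, the kick-register unlock sequence described above could
be performed by a driver roughly as follows (a hedged sketch using standard
Linux OF and MMIO helpers; the function name is made up):

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/io.h>

static void example_dscr_kick_unlock(struct device_node *np)
{
	u32 kick[4];		/* offset0, key0, offset1, key1 */
	void __iomem *base;

	if (of_property_read_u32_array(np, "ti,dscr-kick-regs", kick, 4))
		return;		/* this SoC has no kick registers */

	base = of_iomap(np, 0);
	if (!base)
		return;

	/* both keys must be written, in order, to write-enable the DSCR */
	writel(kick[1], base + kick[0]);
	writel(kick[3], base + kick[2]);

	iounmap(base);
}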
External Memory Interface
-------------------------
The emifa node describes a simple external bus controller found on some C6X
SoCs. This interface provides external busses with a number of chip selects.
Required properties:
- compatible: must be "ti,c64x+emifa", "simple-bus"
- reg: register area base and size
- #address-cells: must be 2 (chip-select + offset)
- #size-cells: must be 1
- ranges: mapping from EMIFA space to parent space
Optional properties:
- ti,dscr-dev-enable: Device ID if EMIF is enabled/disabled from DSCR
- ti,emifa-burst-priority:
Number of memory transfers after which the EMIF will elevate the priority
of the oldest command in the command FIFO. Setting this field to 255
disables this feature, thereby allowing old commands to stay in the FIFO
indefinitely.
- ti,emifa-ce-config:
Configuration values for each of the supported chip selects.
Example:
emifa@70000000 {
compatible = "ti,c64x+emifa", "simple-bus";
#address-cells = <2>;
#size-cells = <1>;
reg = <0x70000000 0x100>;
ranges = <0x2 0x0 0xa0000000 0x00000008
0x3 0x0 0xb0000000 0x00400000
0x4 0x0 0xc0000000 0x10000000
0x5 0x0 0xD0000000 0x10000000>;
ti,dscr-dev-enable = <13>;
ti,emifa-burst-priority = <255>;
ti,emifa-ce-config = <0x00240120
0x00240120
0x00240122
0x00240122>;
flash@3,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "cfi-flash";
reg = <0x3 0x0 0x400000>;
bank-width = <1>;
device-width = <1>;
partition@0 {
reg = <0x0 0x400000>;
label = "NOR";
};
};
};
This shows a flash chip attached to chip select 3.
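For illustration only, a driver could walk the per-chip-select configuration
words roughly as follows (a hedged sketch; the actual register programming is
omitted because the CE config register layout is not part of this binding):

#include <linux/of.h>

static void example_emifa_ce_config(struct device_node *np)
{
	int i, n = of_property_count_u32_elems(np, "ti,emifa-ce-config");
	u32 val;

	for (i = 0; i < n; i++) {
		if (of_property_read_u32_index(np, "ti,emifa-ce-config",
					       i, &val))
			break;
		/* program chip select i with val here */
	}
}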
C6X System-on-Chip
------------------
Required properties:
- compatible: "simple-bus"
- #address-cells: must be 1
- #size-cells: must be 1
- ranges
Optional properties:
- model: specific SoC model
- nodes for IP blocks within SoC
Example:
soc {
compatible = "simple-bus";
model = "tms320c6455";
#address-cells = <1>;
#size-cells = <1>;
ranges;
...
};
C6X Interrupt Chips
-------------------
* C64X+ Core Interrupt Controller
The core interrupt controller provides 16 prioritized interrupts to the
C64X+ core. Priority 0 and 1 are used for reset and NMI respectively.
Priority 2 and 3 are reserved. Priority 4-15 are used for interrupt
sources coming from outside the core.
Required properties:
--------------------
- compatible: Should be "ti,c64x+core-pic";
- #interrupt-cells: <1>
Interrupt Specifier Definition
------------------------------
Single cell specifying the core interrupt priority level (4-15) where
4 is highest priority and 15 is lowest priority.
Example
-------
core_pic: interrupt-controller@0 {
interrupt-controller;
#interrupt-cells = <1>;
compatible = "ti,c64x+core-pic";
};
* C64x+ Megamodule Interrupt Controller
The megamodule PIC consists of four interrupt multiplexers, each of which
combines up to 32 interrupt inputs into a single interrupt output which
may be cascaded into the core interrupt controller. The megamodule PIC
has a total of 12 outputs cascading into the core interrupt controller,
one for each core interrupt priority level. In addition to the combined
interrupt sources, individual megamodule interrupts may be cascaded to
the core interrupt controller. When an individual interrupt is cascaded,
it is no longer handled through a megamodule interrupt combiner and is
considered to have the core interrupt controller as the parent.
Required properties:
--------------------
- compatible: "ti,c64x+megamod-pic"
- interrupt-controller
- #interrupt-cells: <1>
- reg: base address and size of register area
- interrupts: This should have four cells; one for each interrupt combiner.
The cells contain the core priority interrupt to which the
corresponding combiner output is wired.
Optional properties:
--------------------
- ti,c64x+megamod-pic-mux: Array of 12 cells corresponding to the 12 core
priority interrupts. The first cell corresponds to
core priority 4 and the last cell corresponds to
core priority 15. The value of each cell is the
megamodule interrupt source which is MUXed to
the core interrupt corresponding to the cell
position. Allowed values are 4 - 127. Mappings for
interrupts 0 - 3 (combined interrupt sources) are
ignored.
Interrupt Specifier Definition
------------------------------
Single cell specifying the megamodule interrupt source (4-127). Note that
interrupts mapped directly to the core with "ti,c64x+megamod-pic-mux" will
use the core interrupt controller as their parent and the specifier will
be the core priority level, not the megamodule interrupt number.
Examples
--------
megamod_pic: interrupt-controller@1800000 {
compatible = "ti,c64x+megamod-pic";
interrupt-controller;
#interrupt-cells = <1>;
reg = <0x1800000 0x1000>;
interrupt-parent = <&core_pic>;
interrupts = < 12 13 14 15 >;
};
This is a minimal example where all individual interrupts go through a
combiner. Combiner-0 is mapped to core interrupt 12, combiner-1 is mapped
to interrupt 13, etc.
megamod_pic: interrupt-controller@1800000 {
compatible = "ti,c64x+megamod-pic";
interrupt-controller;
#interrupt-cells = <1>;
reg = <0x1800000 0x1000>;
interrupt-parent = <&core_pic>;
interrupts = < 12 13 14 15 >;
ti,c64x+megamod-pic-mux = < 0 0 0 0
32 0 0 0
0 0 0 0 >;
};
This is the same as the first example, except that megamodule interrupt 32 is
mapped directly to core priority interrupt 8. The node using this interrupt
must set the core controller as its interrupt parent and use 8 in the
interrupt specifier value.
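For illustration only, the consumer side looks the same either way: once the
interrupt-parent and specifier are set in the device tree, a platform driver
retrieves the mapped Linux IRQ with the usual helpers (a hedged sketch; the
function and driver names are illustrative):

#include <linux/platform_device.h>
#include <linux/interrupt.h>

static irqreturn_t example_isr(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int example_probe(struct platform_device *pdev)
{
	int irq = platform_get_irq(pdev, 0);

	if (irq < 0)
		return irq;

	return devm_request_irq(&pdev->dev, irq, example_isr, 0,
				"example", NULL);
}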
Timer64
-------
The timer64 node describes C6X event timers.
Required properties:
- compatible: must be "ti,c64x+timer64"
- reg: base address and size of register region
- interrupts: interrupt id
Optional properties:
- ti,dscr-dev-enable: Device ID used to enable timer IP through DSCR interface.
- ti,core-mask: on multi-core SoCs, bitmask of cores allowed to use this timer.
Example:
timer0: timer@25e0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x01 >;
reg = <0x25e0000 0x40>;
interrupt-parent = <&megamod_pic>;
interrupts = < 16 >;
};
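For illustration only, a timer driver could honor ti,dscr-dev-enable with the
dscr_set_devstate() helper that this architecture port provides in
<asm/dscr.h> (a hedged sketch; only that helper comes from the port, the rest
is made up):

#include <linux/of.h>
#include <asm/dscr.h>

static void example_timer64_enable(struct device_node *np)
{
	u32 devid;

	/* only present when the timer is gated through the DSCR */
	if (!of_property_read_u32(np, "ti,dscr-dev-enable", &devid))
		dscr_set_devstate(devid, DSCR_DEVSTATE_ENABLED);
}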
...@@ -3837,14 +3837,6 @@ F: drivers/irqchip/irq-csky-*
N: csky
K: csky
C6X ARCHITECTURE
M: Mark Salter <msalter@redhat.com>
M: Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
L: linux-c6x-dev@linux-c6x.org
S: Maintained
W: http://www.linux-c6x.org/wiki/index.php/Main_Page
F: arch/c6x/
CA8210 IEEE-802.15.4 RADIO DRIVER
M: Harry Morris <h.morris@cascoda.com>
L: linux-wpan@vger.kernel.org
...
# SPDX-License-Identifier: GPL-2.0
#
# For a description of the syntax of this configuration file,
# see Documentation/kbuild/kconfig-language.rst.
#
config C6X
def_bool y
select ARCH_32BIT_OFF_T
select ARCH_HAS_BINFMT_FLAT
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select CLKDEV_LOOKUP
select HAVE_LEGACY_CLK
select GENERIC_ATOMIC64
select GENERIC_IRQ_SHOW
select HAVE_ARCH_TRACEHOOK
select SPARSE_IRQ
select IRQ_DOMAIN
select OF
select OF_EARLY_FLATTREE
select MODULES_USE_ELF_RELA
select MMU_GATHER_NO_RANGE if MMU
select SET_FS
config MMU
def_bool n
config FPU
def_bool n
config GENERIC_CALIBRATE_DELAY
def_bool y
config GENERIC_HWEIGHT
def_bool y
config GENERIC_BUG
def_bool y
depends on BUG
config C6X_BIG_KERNEL
bool "Build a big kernel"
help
The C6X function call instruction has a limited range of +/- 2MiB.
This is sufficient for most kernels, but some kernel configurations
with lots of compiled-in functionality may require a larger range
for function calls. Use this option to have the compiler generate
function calls with 32-bit range. This will make the kernel both
larger and slower.
If unsure, say N.
# Use the generic interrupt handling code in kernel/irq/
config CMDLINE_BOOL
bool "Default bootloader kernel arguments"
config CMDLINE
string "Kernel command line"
depends on CMDLINE_BOOL
default "console=ttyS0,57600"
help
On some architectures there is currently no way for the boot loader
to pass arguments to the kernel. For these architectures, you should
supply some command-line options at build time by entering them
here.
config CMDLINE_FORCE
bool "Force default kernel command string"
depends on CMDLINE_BOOL
default n
help
Set this to have arguments from the default kernel command string
override those passed by the boot loader.
config CPU_BIG_ENDIAN
bool "Build big-endian kernel"
default n
help
Say Y if you plan on running a kernel in big-endian mode.
Note that your board must be properly built and your board
port must properly enable any big-endian related features
of your chipset/board/processor.
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
default "13"
help
The kernel memory allocator divides physically contiguous memory
blocks into "zones", where each zone is a power of two number of
pages. This option selects the largest power of two that the kernel
keeps in the memory allocator. If you need to allocate very large
blocks of physically contiguous memory, then you may need to
increase this value.
This config option is actually maximum order plus one. For example,
a value of 11 means that the largest free memory block is 2^10 pages.
menu "Processor type and features"
source "arch/c6x/platforms/Kconfig"
config KERNEL_RAM_BASE_ADDRESS
hex "Virtual address of memory base"
default 0xe0000000 if SOC_TMS320C6455
default 0xe0000000 if SOC_TMS320C6457
default 0xe0000000 if SOC_TMS320C6472
default 0x80000000
source "kernel/Kconfig.hz"
endmenu
# SPDX-License-Identifier: GPL-2.0
config ACCESS_CHECK
bool "Check the user pointer address"
default y
help
Usually a pointer passed in from user space is checked to make sure its
address does not fall within kernel space.
Say N here to disable that check and improve performance.
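For illustration only, the kind of check this option controls can be sketched
as a simple range test against the kernel image on this nommu architecture
(a hedged sketch, not the actual arch/c6x implementation):

extern char _stext[], _end[];	/* kernel image bounds from the linker script */

/* Sketch only: accept a user buffer if it does not overlap the kernel image. */
static inline int example_user_range_ok(unsigned long addr, unsigned long size)
{
	return addr + size <= (unsigned long)_stext ||
	       addr >= (unsigned long)_end;
}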
#
# linux/arch/c6x/Makefile
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
KBUILD_DEFCONFIG := dsk6455_defconfig
cflags-y += -mno-dsbt -msdata=none -D__linux__
cflags-$(CONFIG_C6X_BIG_KERNEL) += -mlong-calls
KBUILD_CFLAGS_MODULE += -mlong-calls -mno-dsbt -msdata=none
CHECKFLAGS +=
KBUILD_CFLAGS += $(cflags-y)
KBUILD_AFLAGS += $(cflags-y)
ifdef CONFIG_CPU_BIG_ENDIAN
KBUILD_CFLAGS += -mbig-endian
KBUILD_AFLAGS += -mbig-endian
LINKFLAGS += -mbig-endian
KBUILD_LDFLAGS += -mbig-endian -EB
CHECKFLAGS += -D_BIG_ENDIAN
endif
head-y := arch/c6x/kernel/head.o
core-y += arch/c6x/kernel/ arch/c6x/mm/ arch/c6x/platforms/
libs-y += arch/c6x/lib/
# Default to vmlinux.bin, override when needed
all: vmlinux.bin
boot := arch/$(ARCH)/boot
# Are we making a dtbImage.<boardname> target? If so, crack out the boardname
DTB:=$(subst dtbImage.,,$(filter dtbImage.%, $(MAKECMDGOALS)))
export DTB
core-y += $(boot)/dts/
# With make 3.82 we cannot mix normal and wildcard targets
vmlinux.bin: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(patsubst %,$(boot)/%,$@)
dtbImage.%: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(patsubst %,$(boot)/%,$@)
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
define archhelp
@echo ' vmlinux.bin - Binary kernel image (arch/$(ARCH)/boot/vmlinux.bin)'
@echo ' dtbImage.<dt> - ELF image with $(arch)/boot/dts/<dt>.dts linked in'
@echo ' - stripped elf with fdt blob'
endef
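# Example usage (assuming a C6X cross toolchain is installed; the prefix below
# is a placeholder, not a specific toolchain name):
#   make ARCH=c6x CROSS_COMPILE=<c6x-toolchain-prefix-> dtbImage.dsk6455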
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for bootable kernel images
#
OBJCOPYFLAGS_vmlinux.bin := -O binary
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/dtbImage.%: vmlinux
$(call if_changed,objcopy)
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for device trees
#
DTC_FLAGS ?= -p 1024
dtb-$(CONFIG_SOC_TMS320C6455) += dsk6455.dtb
dtb-$(CONFIG_SOC_TMS320C6457) += evmc6457.dtb
dtb-$(CONFIG_SOC_TMS320C6472) += evmc6472.dtb
dtb-$(CONFIG_SOC_TMS320C6474) += evmc6474.dtb
dtb-$(CONFIG_SOC_TMS320C6678) += evmc6678.dtb
ifneq ($(DTB),)
obj-y += $(DTB).dtb.o
endif
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* arch/c6x/boot/dts/dsk6455.dts
*
* DSK6455 Evaluation Platform For TMS320C6455
* Copyright (C) 2011 Texas Instruments Incorporated
*
* Author: Mark Salter <msalter@redhat.com>
*/
/dts-v1/;
/include/ "tms320c6455.dtsi"
/ {
model = "Spectrum Digital DSK6455";
compatible = "spectrum-digital,dsk6455";
chosen {
bootargs = "root=/dev/nfs ip=dhcp rw";
};
memory {
device_type = "memory";
reg = <0xE0000000 0x08000000>;
};
soc {
megamod_pic: interrupt-controller@1800000 {
interrupts = < 12 13 14 15 >;
};
emifa@70000000 {
flash@3,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "cfi-flash";
reg = <0x3 0x0 0x400000>;
bank-width = <1>;
device-width = <1>;
partition@0 {
reg = <0x0 0x400000>;
label = "NOR";
};
};
};
timer1: timer@2980000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 69 >;
};
clock-controller@029a0000 {
clock-frequency = <50000000>;
};
};
};
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* arch/c6x/boot/dts/evmc6457.dts
*
* EVMC6457 Evaluation Platform For TMS320C6457
*
* Copyright (C) 2011 Texas Instruments Incorporated
*
* Author: Mark Salter <msalter@redhat.com>
*/
/dts-v1/;
/include/ "tms320c6457.dtsi"
/ {
model = "eInfochips EVMC6457";
compatible = "einfochips,evmc6457";
chosen {
bootargs = "console=hvc root=/dev/nfs ip=dhcp rw";
};
memory {
device_type = "memory";
reg = <0xE0000000 0x10000000>;
};
soc {
megamod_pic: interrupt-controller@1800000 {
interrupts = < 12 13 14 15 >;
};
timer0: timer@2940000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 67 >;
};
clock-controller@29a0000 {
clock-frequency = <60000000>;
};
};
};
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* arch/c6x/boot/dts/evmc6472.dts
*
* EVMC6472 Evaluation Platform For TMS320C6472
*
* Copyright (C) 2011 Texas Instruments Incorporated
*
* Author: Mark Salter <msalter@redhat.com>
*/
/dts-v1/;
/include/ "tms320c6472.dtsi"
/ {
model = "eInfochips EVMC6472";
compatible = "einfochips,evmc6472";
chosen {
bootargs = "console=hvc root=/dev/nfs ip=dhcp rw";
};
memory {
device_type = "memory";
reg = <0xE0000000 0x10000000>;
};
soc {
megamod_pic: interrupt-controller@1800000 {
interrupts = < 12 13 14 15 >;
};
timer0: timer@25e0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 16 >;
};
timer1: timer@25f0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 16 >;
};
timer2: timer@2600000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 16 >;
};
timer3: timer@2610000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 16 >;
};
timer4: timer@2620000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 16 >;
};
timer5: timer@2630000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 16 >;
};
clock-controller@29a0000 {
clock-frequency = <25000000>;
};
};
};
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* arch/c6x/boot/dts/evmc6474.dts
*
* EVMC6474 Evaluation Platform For TMS320C6474
*
* Copyright (C) 2011 Texas Instruments Incorporated
*
* Author: Mark Salter <msalter@redhat.com>
*/
/dts-v1/;
/include/ "tms320c6474.dtsi"
/ {
model = "Spectrum Digital EVMC6474";
compatible = "spectrum-digital,evmc6474";
chosen {
bootargs = "console=hvc root=/dev/nfs ip=dhcp rw";
};
memory {
device_type = "memory";
reg = <0x80000000 0x08000000>;
};
soc {
megamod_pic: interrupt-controller@1800000 {
interrupts = < 12 13 14 15 >;
};
timer3: timer@2940000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 39 >;
};
timer4: timer@2950000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 41 >;
};
timer5: timer@2960000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 43 >;
};
clock-controller@29a0000 {
clock-frequency = <50000000>;
};
};
};
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* arch/c6x/boot/dts/evmc6678.dts
*
* EVMC6678 Evaluation Platform For TMS320C6678
*
* Copyright (C) 2012 Texas Instruments Incorporated
*
* Author: Ken Cox <jkc@redhat.com>
*/
/dts-v1/;
/include/ "tms320c6678.dtsi"
/ {
model = "Advantech EVMC6678";
compatible = "advantech,evmc6678";
chosen {
bootargs = "root=/dev/nfs ip=dhcp rw";
};
memory {
device_type = "memory";
reg = <0x80000000 0x20000000>;
};
soc {
megamod_pic: interrupt-controller@1800000 {
interrupts = < 12 13 14 15 >;
};
timer8: timer@2280000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 66 >;
};
timer9: timer@2290000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 68 >;
};
timer10: timer@22A0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 70 >;
};
timer11: timer@22B0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 72 >;
};
timer12: timer@22C0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 74 >;
};
timer13: timer@22D0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 76 >;
};
timer14: timer@22E0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 78 >;
};
timer15: timer@22F0000 {
interrupt-parent = <&megamod_pic>;
interrupts = < 80 >;
};
clock-controller@2310000 {
clock-frequency = <100000000>;
};
};
};
// SPDX-License-Identifier: GPL-2.0
/ {
#address-cells = <1>;
#size-cells = <1>;
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
model = "ti,c64x+";
reg = <0>;
};
};
soc {
compatible = "simple-bus";
model = "tms320c6455";
#address-cells = <1>;
#size-cells = <1>;
ranges;
core_pic: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
compatible = "ti,c64x+core-pic";
};
/*
* Megamodule interrupt controller
*/
megamod_pic: interrupt-controller@1800000 {
compatible = "ti,c64x+megamod-pic";
interrupt-controller;
#interrupt-cells = <1>;
reg = <0x1800000 0x1000>;
interrupt-parent = <&core_pic>;
};
cache-controller@1840000 {
compatible = "ti,c64x+cache";
reg = <0x01840000 0x8400>;
};
emifa@70000000 {
compatible = "ti,c64x+emifa", "simple-bus";
#address-cells = <2>;
#size-cells = <1>;
reg = <0x70000000 0x100>;
ranges = <0x2 0x0 0xa0000000 0x00000008
0x3 0x0 0xb0000000 0x00400000
0x4 0x0 0xc0000000 0x10000000
0x5 0x0 0xD0000000 0x10000000>;
ti,dscr-dev-enable = <13>;
ti,emifa-burst-priority = <255>;
ti,emifa-ce-config = <0x00240120
0x00240120
0x00240122
0x00240122>;
};
timer1: timer@2980000 {
compatible = "ti,c64x+timer64";
reg = <0x2980000 0x40>;
ti,dscr-dev-enable = <4>;
};
clock-controller@029a0000 {
compatible = "ti,c6455-pll", "ti,c64x+pll";
reg = <0x029a0000 0x200>;
ti,c64x+pll-bypass-delay = <1440>;
ti,c64x+pll-reset-delay = <15360>;
ti,c64x+pll-lock-delay = <24000>;
};
device-state-config-regs@2a80000 {
compatible = "ti,c64x+dscr";
reg = <0x02a80000 0x41000>;
ti,dscr-devstat = <0>;
ti,dscr-silicon-rev = <8 28 0xf>;
ti,dscr-rmii-resets = <0 0x40020 0x00040000>;
ti,dscr-locked-regs = <0x40008 0x40004 0x0f0a0b00>;
ti,dscr-devstate-ctl-regs =
<0 12 0x40008 1 0 0 2
12 1 0x40008 3 0 30 2
13 2 0x4002c 1 0xffffffff 0 1>;
ti,dscr-devstate-stat-regs =
<0 10 0x40014 1 0 0 3
10 2 0x40018 1 0 0 3>;
};
};
};
// SPDX-License-Identifier: GPL-2.0
/ {
#address-cells = <1>;
#size-cells = <1>;
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
model = "ti,c64x+";
reg = <0>;
};
};
soc {
compatible = "simple-bus";
model = "tms320c6457";
#address-cells = <1>;
#size-cells = <1>;
ranges;
core_pic: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
compatible = "ti,c64x+core-pic";
};
megamod_pic: interrupt-controller@1800000 {
compatible = "ti,c64x+megamod-pic";
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&core_pic>;
reg = <0x1800000 0x1000>;
};
cache-controller@1840000 {
compatible = "ti,c64x+cache";
reg = <0x01840000 0x8400>;
};
device-state-controller@2880800 {
compatible = "ti,c64x+dscr";
reg = <0x02880800 0x400>;
ti,dscr-devstat = <0x20>;
ti,dscr-silicon-rev = <0x18 28 0xf>;
ti,dscr-mac-fuse-regs = <0x114 3 4 5 6
0x118 0 0 1 2>;
ti,dscr-kick-regs = <0x38 0x83E70B13
0x3c 0x95A4F1E0>;
};
timer0: timer@2940000 {
compatible = "ti,c64x+timer64";
reg = <0x2940000 0x40>;
};
clock-controller@29a0000 {
compatible = "ti,c6457-pll", "ti,c64x+pll";
reg = <0x029a0000 0x200>;
ti,c64x+pll-bypass-delay = <300>;
ti,c64x+pll-reset-delay = <24000>;
ti,c64x+pll-lock-delay = <50000>;
};
};
};
// SPDX-License-Identifier: GPL-2.0
/ {
#address-cells = <1>;
#size-cells = <1>;
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
reg = <0>;
model = "ti,c64x+";
};
cpu@1 {
device_type = "cpu";
reg = <1>;
model = "ti,c64x+";
};
cpu@2 {
device_type = "cpu";
reg = <2>;
model = "ti,c64x+";
};
cpu@3 {
device_type = "cpu";
reg = <3>;
model = "ti,c64x+";
};
cpu@4 {
device_type = "cpu";
reg = <4>;
model = "ti,c64x+";
};
cpu@5 {
device_type = "cpu";
reg = <5>;
model = "ti,c64x+";
};
};
soc {
compatible = "simple-bus";
model = "tms320c6472";
#address-cells = <1>;
#size-cells = <1>;
ranges;
core_pic: interrupt-controller {
compatible = "ti,c64x+core-pic";
interrupt-controller;
#interrupt-cells = <1>;
};
megamod_pic: interrupt-controller@1800000 {
compatible = "ti,c64x+megamod-pic";
interrupt-controller;
#interrupt-cells = <1>;
reg = <0x1800000 0x1000>;
interrupt-parent = <&core_pic>;
};
cache-controller@1840000 {
compatible = "ti,c64x+cache";
reg = <0x01840000 0x8400>;
};
timer0: timer@25e0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x01 >;
reg = <0x25e0000 0x40>;
};
timer1: timer@25f0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x02 >;
reg = <0x25f0000 0x40>;
};
timer2: timer@2600000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x04 >;
reg = <0x2600000 0x40>;
};
timer3: timer@2610000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x08 >;
reg = <0x2610000 0x40>;
};
timer4: timer@2620000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x10 >;
reg = <0x2620000 0x40>;
};
timer5: timer@2630000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x20 >;
reg = <0x2630000 0x40>;
};
clock-controller@29a0000 {
compatible = "ti,c6472-pll", "ti,c64x+pll";
reg = <0x029a0000 0x200>;
ti,c64x+pll-bypass-delay = <200>;
ti,c64x+pll-reset-delay = <12000>;
ti,c64x+pll-lock-delay = <80000>;
};
device-state-controller@2a80000 {
compatible = "ti,c64x+dscr";
reg = <0x02a80000 0x1000>;
ti,dscr-devstat = <0>;
ti,dscr-silicon-rev = <0x70c 16 0xff>;
ti,dscr-mac-fuse-regs = <0x700 1 2 3 4
0x704 5 6 0 0>;
ti,dscr-rmii-resets = <0x208 1
0x20c 1>;
ti,dscr-locked-regs = <0x200 0x204 0x0a1e183a
0x40c 0x420 0xbea7
0x41c 0x420 0xbea7>;
ti,dscr-privperm = <0x41c 0xaaaaaaaa>;
ti,dscr-devstate-ctl-regs = <0 13 0x200 1 0 0 1>;
};
};
};
// SPDX-License-Identifier: GPL-2.0
/ {
#address-cells = <1>;
#size-cells = <1>;
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
reg = <0>;
model = "ti,c64x+";
};
cpu@1 {
device_type = "cpu";
reg = <1>;
model = "ti,c64x+";
};
cpu@2 {
device_type = "cpu";
reg = <2>;
model = "ti,c64x+";
};
};
soc {
compatible = "simple-bus";
model = "tms320c6474";
#address-cells = <1>;
#size-cells = <1>;
ranges;
core_pic: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
compatible = "ti,c64x+core-pic";
};
megamod_pic: interrupt-controller@1800000 {
compatible = "ti,c64x+megamod-pic";
interrupt-controller;
#interrupt-cells = <1>;
reg = <0x1800000 0x1000>;
interrupt-parent = <&core_pic>;
};
cache-controller@1840000 {
compatible = "ti,c64x+cache";
reg = <0x01840000 0x8400>;
};
timer3: timer@2940000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x04 >;
reg = <0x2940000 0x40>;
};
timer4: timer@2950000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x02 >;
reg = <0x2950000 0x40>;
};
timer5: timer@2960000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x01 >;
reg = <0x2960000 0x40>;
};
device-state-controller@2880800 {
compatible = "ti,c64x+dscr";
reg = <0x02880800 0x400>;
ti,dscr-devstat = <0x004>;
ti,dscr-silicon-rev = <0x014 28 0xf>;
ti,dscr-mac-fuse-regs = <0x34 3 4 5 6
0x38 0 0 1 2>;
};
clock-controller@29a0000 {
compatible = "ti,c6474-pll", "ti,c64x+pll";
reg = <0x029a0000 0x200>;
ti,c64x+pll-bypass-delay = <120>;
ti,c64x+pll-reset-delay = <30000>;
ti,c64x+pll-lock-delay = <60000>;
};
};
};
// SPDX-License-Identifier: GPL-2.0
/ {
#address-cells = <1>;
#size-cells = <1>;
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
reg = <0>;
model = "ti,c66x";
};
cpu@1 {
device_type = "cpu";
reg = <1>;
model = "ti,c66x";
};
cpu@2 {
device_type = "cpu";
reg = <2>;
model = "ti,c66x";
};
cpu@3 {
device_type = "cpu";
reg = <3>;
model = "ti,c66x";
};
cpu@4 {
device_type = "cpu";
reg = <4>;
model = "ti,c66x";
};
cpu@5 {
device_type = "cpu";
reg = <5>;
model = "ti,c66x";
};
cpu@6 {
device_type = "cpu";
reg = <6>;
model = "ti,c66x";
};
cpu@7 {
device_type = "cpu";
reg = <7>;
model = "ti,c66x";
};
};
soc {
compatible = "simple-bus";
model = "tms320c6678";
#address-cells = <1>;
#size-cells = <1>;
ranges;
core_pic: interrupt-controller {
compatible = "ti,c64x+core-pic";
interrupt-controller;
#interrupt-cells = <1>;
};
megamod_pic: interrupt-controller@1800000 {
compatible = "ti,c64x+megamod-pic";
interrupt-controller;
#interrupt-cells = <1>;
reg = <0x1800000 0x1000>;
interrupt-parent = <&core_pic>;
};
cache-controller@1840000 {
compatible = "ti,c64x+cache";
reg = <0x01840000 0x8400>;
};
timer8: timer@2280000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x01 >;
reg = <0x2280000 0x40>;
};
timer9: timer@2290000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x02 >;
reg = <0x2290000 0x40>;
};
timer10: timer@22A0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x04 >;
reg = <0x22A0000 0x40>;
};
timer11: timer@22B0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x08 >;
reg = <0x22B0000 0x40>;
};
timer12: timer@22C0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x10 >;
reg = <0x22C0000 0x40>;
};
timer13: timer@22D0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x20 >;
reg = <0x22D0000 0x40>;
};
timer14: timer@22E0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x40 >;
reg = <0x22E0000 0x40>;
};
timer15: timer@22F0000 {
compatible = "ti,c64x+timer64";
ti,core-mask = < 0x80 >;
reg = <0x22F0000 0x40>;
};
clock-controller@2310000 {
compatible = "ti,c6678-pll", "ti,c64x+pll";
reg = <0x02310000 0x200>;
ti,c64x+pll-bypass-delay = <200>;
ti,c64x+pll-reset-delay = <12000>;
ti,c64x+pll-lock-delay = <80000>;
};
device-state-controller@2620000 {
compatible = "ti,c64x+dscr";
reg = <0x02620000 0x1000>;
ti,dscr-devstat = <0x20>;
ti,dscr-silicon-rev = <0x18 28 0xf>;
ti,dscr-mac-fuse-regs = <0x110 1 2 3 4
0x114 5 6 0 0>;
};
};
};
CONFIG_SOC_TMS320C6455=y
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_SYSVIPC=y
CONFIG_SPARSE_IRQ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_EXPERT=y
# CONFIG_FUTEX is not set
# CONFIG_SLUB_DEBUG is not set
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE=""
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=2
CONFIG_BLK_DEV_RAM_SIZE=17000
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_CRC16=y
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_BUGVERBOSE is not set
CONFIG_MTD=y
CONFIG_MTD_CFI=y
CONFIG_MTD_CFI_AMDSTD=y
CONFIG_MTD_PHYSMAP_OF=y
CONFIG_SOC_TMS320C6457=y
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_SYSVIPC=y
CONFIG_SPARSE_IRQ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_EXPERT=y
# CONFIG_FUTEX is not set
# CONFIG_SLUB_DEBUG is not set
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE=""
CONFIG_BOARD_EVM6457=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=2
CONFIG_BLK_DEV_RAM_SIZE=17000
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_CRC16=y
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_BUGVERBOSE is not set
CONFIG_SOC_TMS320C6472=y
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_SYSVIPC=y
CONFIG_SPARSE_IRQ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_EXPERT=y
# CONFIG_FUTEX is not set
# CONFIG_SLUB_DEBUG is not set
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE=""
# CONFIG_CMDLINE_FORCE is not set
CONFIG_BOARD_EVM6472=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=2
CONFIG_BLK_DEV_RAM_SIZE=17000
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_CRC16=y
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_BUGVERBOSE is not set
CONFIG_SOC_TMS320C6474=y
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_SYSVIPC=y
CONFIG_SPARSE_IRQ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_EXPERT=y
# CONFIG_FUTEX is not set
# CONFIG_SLUB_DEBUG is not set
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE=""
# CONFIG_CMDLINE_FORCE is not set
CONFIG_BOARD_EVM6474=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=2
CONFIG_BLK_DEV_RAM_SIZE=17000
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_CRC16=y
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_BUGVERBOSE is not set
CONFIG_SOC_TMS320C6678=y
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_SYSVIPC=y
CONFIG_SPARSE_IRQ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_EXPERT=y
# CONFIG_FUTEX is not set
# CONFIG_SLUB_DEBUG is not set
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE=""
# CONFIG_CMDLINE_FORCE is not set
CONFIG_BOARD_EVM6678=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=2
CONFIG_BLK_DEV_RAM_SIZE=17000
# CONFIG_INPUT is not set
# CONFIG_SERIO is not set
# CONFIG_VT is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_CRC16=y
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_BUGVERBOSE is not set
# SPDX-License-Identifier: GPL-2.0
generic-y += extable.h
generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += user.h
#include <generated/asm-offsets.h>
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_BITOPS_H
#define _ASM_C6X_BITOPS_H
#ifdef __KERNEL__
#include <linux/bitops.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>
/*
* We are lucky, DSP is perfect for bitops: do it in 3 cycles
*/
/**
* __ffs - find first bit in word.
* @word: The word to search
*
* Undefined if no bit exists, so code should check against 0 first.
* Note __ffs(0) = undef, __ffs(1) = 0, __ffs(0x80000000) = 31.
*
*/
static inline unsigned long __ffs(unsigned long x)
{
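	/*
	 * BITR reverses the bit order, so the original least-significant set
	 * bit becomes the most-significant one; LMBD with a search value of 1
	 * then returns how many leading zeros precede it, which is exactly
	 * the original bit index.
	 */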
asm (" bitr .M1 %0,%0\n"
" nop\n"
" lmbd .L1 1,%0,%0\n"
: "+a"(x));
return x;
}
/*
* ffz - find first zero in word.
* @word: The word to search
*
* Undefined if no zero exists, so code should check against ~0UL first.
*/
#define ffz(x) __ffs(~(x))
/**
* fls - find last (most-significant) bit set
* @x: the word to search
*
* This is defined the same way as ffs.
* Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32.
*/
static inline int fls(unsigned int x)
{
if (!x)
return 0;
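	/*
	 * LMBD returns the number of leading zeros before the first 1 bit, so
	 * 32 minus that count is the 1-based index of the most-significant
	 * set bit.
	 */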
asm (" lmbd .L1 1,%0,%0\n" : "+a"(x));
return 32 - x;
}
/**
* ffs - find first bit set
* @x: the word to search
*
* This is defined the same way as
* the libc and compiler builtin ffs routines, therefore
* differs in spirit from the above ffz (man ffs).
* Note ffs(0) = 0, ffs(1) = 1, ffs(0x80000000) = 32.
*/
static inline int ffs(int x)
{
if (!x)
return 0;
return __ffs(x) + 1;
}
#include <asm-generic/bitops/__fls.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/find.h>
#include <asm-generic/bitops/sched.h>
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/lock.h>
#include <asm-generic/bitops/atomic.h>
#include <asm-generic/bitops/non-atomic.h>
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>
#endif /* __KERNEL__ */
#endif /* _ASM_C6X_BITOPS_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_BUG_H
#define _ASM_C6X_BUG_H
#include <linux/linkage.h>
#include <asm-generic/bug.h>
struct pt_regs;
extern void die(char *str, struct pt_regs *fp, int nr);
extern asmlinkage int process_exception(struct pt_regs *regs);
extern asmlinkage void enable_exception(void);
#endif /* _ASM_C6X_BUG_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2005, 2006, 2009, 2010, 2012 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_CACHE_H
#define _ASM_C6X_CACHE_H
#include <linux/irqflags.h>
#include <linux/init.h>
/*
* Cache line size
*/
#define L1D_CACHE_SHIFT 6
#define L1D_CACHE_BYTES (1 << L1D_CACHE_SHIFT)
#define L1P_CACHE_SHIFT 5
#define L1P_CACHE_BYTES (1 << L1P_CACHE_SHIFT)
#define L2_CACHE_SHIFT 7
#define L2_CACHE_BYTES (1 << L2_CACHE_SHIFT)
/*
* L2 used as cache
*/
#define L2MODE_SIZE L2MODE_256K_CACHE
/*
* For practical reasons the L1_CACHE_BYTES defines should not be smaller than
* the L2 line size
*/
#define L1_CACHE_SHIFT L2_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#define L2_CACHE_ALIGN_LOW(x) \
(((x) & ~(L2_CACHE_BYTES - 1)))
#define L2_CACHE_ALIGN_UP(x) \
(((x) + (L2_CACHE_BYTES - 1)) & ~(L2_CACHE_BYTES - 1))
#define L2_CACHE_ALIGN_CNT(x) \
(((x) + (sizeof(int) - 1)) & ~(sizeof(int) - 1))
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
#define ARCH_SLAB_MINALIGN L1_CACHE_BYTES
/*
* This is the granularity of hardware cacheability control.
*/
#define CACHEABILITY_ALIGN 0x01000000
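/* 0x01000000 = 16 MiB: each C64x+ MAR (Memory Attribute Register) controls
 * the cacheability of one 16 MiB address range. */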
/*
* Align a physical address to MAR regions
*/
#define CACHE_REGION_START(v) \
(((u32) (v)) & ~(CACHEABILITY_ALIGN - 1))
#define CACHE_REGION_END(v) \
(((u32) (v) + (CACHEABILITY_ALIGN - 1)) & ~(CACHEABILITY_ALIGN - 1))
extern void __init c6x_cache_init(void);
extern void enable_caching(unsigned long start, unsigned long end);
extern void disable_caching(unsigned long start, unsigned long end);
extern void L1_cache_off(void);
extern void L1_cache_on(void);
extern void L1P_cache_global_invalidate(void);
extern void L1D_cache_global_invalidate(void);
extern void L1D_cache_global_writeback(void);
extern void L1D_cache_global_writeback_invalidate(void);
extern void L2_cache_set_mode(unsigned int mode);
extern void L2_cache_global_writeback_invalidate(void);
extern void L2_cache_global_writeback(void);
extern void L1P_cache_block_invalidate(unsigned int start, unsigned int end);
extern void L1D_cache_block_invalidate(unsigned int start, unsigned int end);
extern void L1D_cache_block_writeback_invalidate(unsigned int start,
unsigned int end);
extern void L1D_cache_block_writeback(unsigned int start, unsigned int end);
extern void L2_cache_block_invalidate(unsigned int start, unsigned int end);
extern void L2_cache_block_writeback(unsigned int start, unsigned int end);
extern void L2_cache_block_writeback_invalidate(unsigned int start,
unsigned int end);
extern void L2_cache_block_invalidate_nowait(unsigned int start,
unsigned int end);
extern void L2_cache_block_writeback_nowait(unsigned int start,
unsigned int end);
extern void L2_cache_block_writeback_invalidate_nowait(unsigned int start,
unsigned int end);
#endif /* _ASM_C6X_CACHE_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_CACHEFLUSH_H
#define _ASM_C6X_CACHEFLUSH_H
#include <linux/spinlock.h>
#include <asm/setup.h>
#include <asm/cache.h>
#include <asm/mman.h>
#include <asm/page.h>
#include <asm/string.h>
/*
* physically-indexed cache management
*/
#define flush_icache_range(s, e) \
do { \
L1D_cache_block_writeback((s), (e)); \
L1P_cache_block_invalidate((s), (e)); \
} while (0)
#define flush_icache_page(vma, page) \
do { \
if ((vma)->vm_flags & PROT_EXEC) \
L1D_cache_block_writeback_invalidate(page_address(page), \
(unsigned long) page_address(page) + PAGE_SIZE); \
L1P_cache_block_invalidate(page_address(page), \
(unsigned long) page_address(page) + PAGE_SIZE); \
} while (0)
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
do { \
memcpy(dst, src, len); \
flush_icache_range((unsigned) (dst), (unsigned) (dst) + (len)); \
} while (0)
#include <asm-generic/cacheflush.h>
#endif /* _ASM_C6X_CACHEFLUSH_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_CHECKSUM_H
#define _ASM_C6X_CHECKSUM_H
static inline __wsum
csum_tcpudp_nofold(__be32 saddr, __be32 daddr, __u32 len,
__u8 proto, __wsum sum)
{
unsigned long long tmp;
asm ("add .d1 %1,%5,%1\n"
"|| addu .l1 %3,%4,%0\n"
"addu .l1 %2,%0,%0\n"
#ifndef CONFIG_CPU_BIG_ENDIAN
"|| shl .s1 %1,8,%1\n"
#endif
"addu .l1 %1,%0,%0\n"
"add .l1 %P0,%p0,%2\n"
: "=&a"(tmp), "+a"(len), "+a"(sum)
: "a" (saddr), "a" (daddr), "a" (proto));
return sum;
}
#define csum_tcpudp_nofold csum_tcpudp_nofold
#define _HAVE_ARCH_CSUM_AND_COPY
extern __wsum csum_partial_copy_nocheck(const void *src, void *dst, int len);
#include <asm-generic/checksum.h>
#endif /* _ASM_C6X_CHECKSUM_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* TI C64X clock definitions
*
* Copyright (C) 2010, 2011 Texas Instruments.
* Contributed by: Mark Salter <msalter@redhat.com>
*
* Copied heavily from arm/mach-davinci/clock.h, so:
*
* Copyright (C) 2006-2007 Texas Instruments.
* Copyright (C) 2008-2009 Deep Root Systems, LLC
*/
#ifndef _ASM_C6X_CLOCK_H
#define _ASM_C6X_CLOCK_H
#ifndef __ASSEMBLER__
#include <linux/list.h>
/* PLL/Reset register offsets */
#define PLLCTL 0x100
#define PLLM 0x110
#define PLLPRE 0x114
#define PLLDIV1 0x118
#define PLLDIV2 0x11c
#define PLLDIV3 0x120
#define PLLPOST 0x128
#define PLLCMD 0x138
#define PLLSTAT 0x13c
#define PLLALNCTL 0x140
#define PLLDCHANGE 0x144
#define PLLCKEN 0x148
#define PLLCKSTAT 0x14c
#define PLLSYSTAT 0x150
#define PLLDIV4 0x160
#define PLLDIV5 0x164
#define PLLDIV6 0x168
#define PLLDIV7 0x16c
#define PLLDIV8 0x170
#define PLLDIV9 0x174
#define PLLDIV10 0x178
#define PLLDIV11 0x17c
#define PLLDIV12 0x180
#define PLLDIV13 0x184
#define PLLDIV14 0x188
#define PLLDIV15 0x18c
#define PLLDIV16 0x190
/* PLLM register bits */
#define PLLM_PLLM_MASK 0xff
#define PLLM_VAL(x) ((x) - 1)
/* PREDIV register bits */
#define PLLPREDIV_EN BIT(15)
#define PLLPREDIV_VAL(x) ((x) - 1)
/* PLLCTL register bits */
#define PLLCTL_PLLEN BIT(0)
#define PLLCTL_PLLPWRDN BIT(1)
#define PLLCTL_PLLRST BIT(3)
#define PLLCTL_PLLDIS BIT(4)
#define PLLCTL_PLLENSRC BIT(5)
#define PLLCTL_CLKMODE BIT(8)
/* PLLCMD register bits */
#define PLLCMD_GOSTAT BIT(0)
/* PLLSTAT register bits */
#define PLLSTAT_GOSTAT BIT(0)
/* PLLDIV register bits */
#define PLLDIV_EN BIT(15)
#define PLLDIV_RATIO_MASK 0x1f
#define PLLDIV_RATIO(x) ((x) - 1)
struct pll_data;
struct clk {
struct list_head node;
struct module *owner;
const char *name;
unsigned long rate;
int usecount;
u32 flags;
struct clk *parent;
struct list_head children; /* list of children */
struct list_head childnode; /* parent's child list node */
struct pll_data *pll_data;
u32 div;
unsigned long (*recalc) (struct clk *);
int (*set_rate) (struct clk *clk, unsigned long rate);
int (*round_rate) (struct clk *clk, unsigned long rate);
};
/* Clock flags: SoC-specific flags start at BIT(16) */
#define ALWAYS_ENABLED BIT(1)
#define CLK_PLL BIT(2) /* PLL-derived clock */
#define PRE_PLL BIT(3) /* source is before PLL mult/div */
#define FIXED_DIV_PLL BIT(4) /* fixed divisor from PLL */
#define FIXED_RATE_PLL BIT(5) /* fixed output rate PLL */
#define MAX_PLL_SYSCLKS 16
struct pll_data {
void __iomem *base;
u32 num;
u32 flags;
u32 input_rate;
u32 bypass_delay; /* in loops */
u32 reset_delay; /* in loops */
u32 lock_delay; /* in loops */
struct clk sysclks[MAX_PLL_SYSCLKS + 1];
};
/* pll_data flag bit */
#define PLL_HAS_PRE BIT(0)
#define PLL_HAS_MUL BIT(1)
#define PLL_HAS_POST BIT(2)
#define CLK(dev, con, ck) \
{ \
.dev_id = dev, \
.con_id = con, \
.clk = ck, \
} \
extern void c6x_clks_init(struct clk_lookup *clocks);
extern int clk_register(struct clk *clk);
extern void clk_unregister(struct clk *clk);
extern void c64x_setup_clocks(void);
extern struct pll_data c6x_soc_pll1;
extern struct clk clkin1;
extern struct clk c6x_core_clk;
extern struct clk c6x_i2c_clk;
extern struct clk c6x_watchdog_clk;
extern struct clk c6x_mcbsp1_clk;
extern struct clk c6x_mcbsp2_clk;
extern struct clk c6x_mdio_clk;
#endif
#endif /* _ASM_C6X_CLOCK_H */
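For illustration only, a board or SoC file could describe one more leaf clock
with the structures above and hand it to clk_register() (a hedged sketch; the
clock name is made up, while c6x_core_clk and ALWAYS_ENABLED come from this
header):

#include <linux/init.h>
#include <asm/clock.h>

static struct clk example_leaf_clk = {
	.name	= "example_leaf",
	.parent	= &c6x_core_clk,
	.flags	= ALWAYS_ENABLED,
};

static int __init example_clk_setup(void)
{
	return clk_register(&example_leaf_clk);
}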
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_CMPXCHG_H
#define _ASM_C6X_CMPXCHG_H
#include <linux/irqflags.h>
/*
* Misc. functions
*/
static inline unsigned int __xchg(unsigned int x, volatile void *ptr, int size)
{
unsigned int tmp;
unsigned long flags;
local_irq_save(flags);
switch (size) {
case 1:
tmp = 0;
tmp = *((unsigned char *) ptr);
*((unsigned char *) ptr) = (unsigned char) x;
break;
case 2:
tmp = 0;
tmp = *((unsigned short *) ptr);
*((unsigned short *) ptr) = x;
break;
case 4:
tmp = 0;
tmp = *((unsigned int *) ptr);
*((unsigned int *) ptr) = x;
break;
}
local_irq_restore(flags);
return tmp;
}
#define xchg(ptr, x) \
((__typeof__(*(ptr)))__xchg((unsigned int)(x), (void *) (ptr), \
sizeof(*(ptr))))
#include <asm-generic/cmpxchg-local.h>
/*
* cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
* them available.
*/
#define cmpxchg_local(ptr, o, n) \
((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), \
(unsigned long)(o), \
(unsigned long)(n), \
sizeof(*(ptr))))
#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
#include <asm-generic/cmpxchg.h>
#endif /* _ASM_C6X_CMPXCHG_H */
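For illustration only, a minimal use of the xchg() defined above (a hedged
sketch; interrupt masking makes the swap atomic on this uniprocessor, nommu
port):

#include <asm/cmpxchg.h>

static unsigned int example_flag;

/* returns nonzero if we were the one to set the flag */
static int example_test_and_set(void)
{
	return xchg(&example_flag, 1) == 0;
}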
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_DELAY_H
#define _ASM_C6X_DELAY_H
#include <linux/kernel.h>
extern unsigned int ticks_per_ns_scaled;
static inline void __delay(unsigned long loops)
{
uint32_t tmp;
/* 6 cycles per loop */
asm volatile (" mv .s1 %0,%1\n"
"0: [%1] b .s1 0b\n"
" add .l1 -6,%0,%0\n"
" cmplt .l1 1,%0,%1\n"
" nop 3\n"
: "+a"(loops), "=A"(tmp));
}
static inline void _c6x_tickdelay(unsigned int x)
{
uint32_t cnt, endcnt;
asm volatile (" mvc .s2 TSCL,%0\n"
" add .s2x %0,%1,%2\n"
" || mvk .l2 1,B0\n"
"0: [B0] b .s2 0b\n"
" mvc .s2 TSCL,%0\n"
" sub .s2 %0,%2,%0\n"
" cmpgt .l2 0,%0,B0\n"
" nop 2\n"
: "=b"(cnt), "+a"(x), "=b"(endcnt) : : "B0");
}
/* use scaled math to avoid slow division */
#define C6X_NDELAY_SCALE 10
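/*
 * ticks_per_ns_scaled is presumably the timestamp-counter rate in ticks per
 * nanosecond, pre-multiplied by 2^C6X_NDELAY_SCALE at boot, so the multiply
 * below only needs a shift instead of a division.
 */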
static inline void _ndelay(unsigned int n)
{
_c6x_tickdelay((ticks_per_ns_scaled * n) >> C6X_NDELAY_SCALE);
}
static inline void _udelay(unsigned int n)
{
while (n >= 10) {
_ndelay(10000);
n -= 10;
}
while (n-- > 0)
_ndelay(1000);
}
#define udelay(x) _udelay((unsigned int)(x))
#define ndelay(x) _ndelay((unsigned int)(x))
#endif /* _ASM_C6X_DELAY_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_DSCR_H
#define _ASM_C6X_DSCR_H
enum dscr_devstate_t {
DSCR_DEVSTATE_ENABLED,
DSCR_DEVSTATE_DISABLED,
};
/*
* Set the device state of the device with the given ID.
*
* Individual drivers should use this to enable or disable the
* hardware device. The devid used to identify the device being
* controlled should be a property in the device's tree node.
*/
extern void dscr_set_devstate(int devid, enum dscr_devstate_t state);
/*
* Assert or de-assert an RMII reset.
*/
extern void dscr_rmii_reset(int id, int assert);
extern void dscr_probe(void);
#endif /* _ASM_C6X_DSCR_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_ELF_H
#define _ASM_C6X_ELF_H
/*
* ELF register definitions..
*/
#include <asm/ptrace.h>
typedef unsigned long elf_greg_t;
typedef unsigned long elf_fpreg_t;
#define ELF_NGREG 58
#define ELF_NFPREG 1
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch(x) ((x)->e_machine == EM_TI_C6000)
#define elf_check_fdpic(x) (1)
#define elf_check_const_displacement(x) (0)
#define ELF_FDPIC_PLAT_INIT(_regs, _exec_map, _interp_map, _dynamic_addr) \
do { \
_regs->b4 = (_exec_map); \
_regs->a6 = (_interp_map); \
_regs->b6 = (_dynamic_addr); \
} while (0)
#define ELF_FDPIC_CORE_EFLAGS 0
/*
* These are used to set parameters in the core dumps.
*/
#ifdef __LITTLE_ENDIAN__
#define ELF_DATA ELFDATA2LSB
#else
#define ELF_DATA ELFDATA2MSB
#endif
#define ELF_CLASS ELFCLASS32
#define ELF_ARCH EM_TI_C6000
/* Nothing for now. Need to setup DP... */
#define ELF_PLAT_INIT(_r)
#define ELF_EXEC_PAGESIZE 4096
#define ELF_CORE_COPY_REGS(_dest, _regs) \
memcpy((char *) &_dest, (char *) _regs, \
sizeof(struct pt_regs));
/* This yields a mask that user programs can use to figure out what
instruction set this cpu supports. */
#define ELF_HWCAP (0)
/* This yields a string that ld.so will use to load implementation
specific libraries for optimization. This is more specific in
intent than poking at uname or /proc/cpuinfo. */
#define ELF_PLATFORM (NULL)
/* C6X specific section types */
#define SHT_C6000_UNWIND 0x70000001
#define SHT_C6000_PREEMPTMAP 0x70000002
#define SHT_C6000_ATTRIBUTES 0x70000003
/* C6X specific DT_ tags */
#define DT_C6000_DSBT_BASE 0x70000000
#define DT_C6000_DSBT_SIZE 0x70000001
#define DT_C6000_PREEMPTMAP 0x70000002
#define DT_C6000_DSBT_INDEX 0x70000003
/* C6X specific relocs */
#define R_C6000_NONE 0
#define R_C6000_ABS32 1
#define R_C6000_ABS16 2
#define R_C6000_ABS8 3
#define R_C6000_PCR_S21 4
#define R_C6000_PCR_S12 5
#define R_C6000_PCR_S10 6
#define R_C6000_PCR_S7 7
#define R_C6000_ABS_S16 8
#define R_C6000_ABS_L16 9
#define R_C6000_ABS_H16 10
#define R_C6000_SBR_U15_B 11
#define R_C6000_SBR_U15_H 12
#define R_C6000_SBR_U15_W 13
#define R_C6000_SBR_S16 14
#define R_C6000_SBR_L16_B 15
#define R_C6000_SBR_L16_H 16
#define R_C6000_SBR_L16_W 17
#define R_C6000_SBR_H16_B 18
#define R_C6000_SBR_H16_H 19
#define R_C6000_SBR_H16_W 20
#define R_C6000_SBR_GOT_U15_W 21
#define R_C6000_SBR_GOT_L16_W 22
#define R_C6000_SBR_GOT_H16_W 23
#define R_C6000_DSBT_INDEX 24
#define R_C6000_PREL31 25
#define R_C6000_COPY 26
#define R_C6000_ALIGN 253
#define R_C6000_FPHEAD 254
#define R_C6000_NOCMP 255
#endif /*_ASM_C6X_ELF_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_C6X_FLAT_H
#define __ASM_C6X_FLAT_H
#include <asm/unaligned.h>
static inline int flat_get_addr_from_rp(u32 __user *rp, u32 relval, u32 flags,
u32 *addr)
{
*addr = get_unaligned((__force u32 *)rp);
return 0;
}
static inline int flat_put_addr_at_rp(u32 __user *rp, u32 addr, u32 rel)
{
put_unaligned(addr, (__force u32 *)rp);
return 0;
}
#endif /* __ASM_C6X_FLAT_H */
#ifndef _ASM_C6X_FTRACE_H
#define _ASM_C6X_FTRACE_H
/* empty */
#endif /* _ASM_C6X_FTRACE_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_HARDIRQ_H
#define _ASM_C6X_HARDIRQ_H
extern void ack_bad_irq(int irq);
#define ack_bad_irq ack_bad_irq
#include <asm-generic/hardirq.h>
#endif /* _ASM_C6X_HARDIRQ_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2006, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Large parts taken directly from powerpc.
*/
#ifndef _ASM_C6X_IRQ_H
#define _ASM_C6X_IRQ_H
#include <linux/irqdomain.h>
#include <linux/threads.h>
#include <linux/list.h>
#include <linux/radix-tree.h>
#include <asm/percpu.h>
#define irq_canonicalize(irq) (irq)
/*
* The C64X+ core has 16 IRQ vectors. One each is used by Reset and NMI. Two
* are reserved. The remaining 12 vectors are used to route SoC interrupts.
* These interrupt vectors are prioritized with IRQ 4 having the highest
* priority and IRQ 15 having the lowest.
*
* The C64x+ megamodule provides a PIC which combines SoC IRQ sources into a
* single core IRQ vector. There are four combined sources, each of which
* feed into one of the 12 general interrupt vectors. The remaining 8 vectors
* can each route a single SoC interrupt directly.
*/
#define NR_PRIORITY_IRQS 16
/* Total number of virq in the platform */
#define NR_IRQS 256
/* This number is used when no interrupt has been assigned */
#define NO_IRQ 0
extern void __init init_pic_c64xplus(void);
extern void init_IRQ(void);
struct pt_regs;
extern asmlinkage void c6x_do_IRQ(unsigned int prio, struct pt_regs *regs);
extern unsigned long irq_err_count;
#endif /* _ASM_C6X_IRQ_H */
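/*
 * Illustrative sketch, not part of the original header: once the megamodule
 * PIC / core irq domain has mapped a hardware priority to a Linux virq
 * (typically via irq_of_parse_and_map() on a device tree node), a driver
 * hooks it with the generic request_irq() API. All names below are
 * hypothetical.
 */
#if 0	/* example only */
#include <linux/interrupt.h>

static irqreturn_t example_handler(int irq, void *dev_id)
{
	/* acknowledge and handle the SoC event here */
	return IRQ_HANDLED;
}

static int example_hook_irq(unsigned int virq)
{
	return request_irq(virq, example_handler, 0, "example-dev", NULL);
}
#endif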
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* C6X IRQ flag handling
*
* Copyright (C) 2010 Texas Instruments Incorporated
* Written by Mark Salter (msalter@redhat.com)
*/
#ifndef _ASM_IRQFLAGS_H
#define _ASM_IRQFLAGS_H
#ifndef __ASSEMBLY__
/* read interrupt enabled status */
static inline unsigned long arch_local_save_flags(void)
{
unsigned long flags;
asm volatile (" mvc .s2 CSR,%0\n" : "=b"(flags));
return flags;
}
/* set interrupt enabled status */
static inline void arch_local_irq_restore(unsigned long flags)
{
asm volatile (" mvc .s2 %0,CSR\n" : : "b"(flags) : "memory");
}
/* unconditionally enable interrupts */
static inline void arch_local_irq_enable(void)
{
unsigned long flags = arch_local_save_flags();
flags |= 1;
arch_local_irq_restore(flags);
}
/* unconditionally disable interrupts */
static inline void arch_local_irq_disable(void)
{
unsigned long flags = arch_local_save_flags();
flags &= ~1;
arch_local_irq_restore(flags);
}
/* get status and disable interrupts */
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags;
flags = arch_local_save_flags();
arch_local_irq_restore(flags & ~1);
return flags;
}
/* test flags */
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
return (flags & 1) == 0;
}
/* test hardware interrupt enable bit */
static inline int arch_irqs_disabled(void)
{
return arch_irqs_disabled_flags(arch_local_save_flags());
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_IRQFLAGS_H */
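/*
 * Illustrative sketch, not part of the original file: the helpers above
 * compose into the usual save/restore critical-section pattern (bit 0 of
 * CSR is the global interrupt enable bit, hence the masking with ~1).
 * The function name is hypothetical.
 */
#if 0	/* example only */
static inline void example_critical_section(void)
{
	unsigned long flags;

	flags = arch_local_irq_save();	/* clear GIE, remember old CSR */
	/* ... code that must not be interrupted ... */
	arch_local_irq_restore(flags);	/* restore previous GIE state */
}
#endif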
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_C6X_LINKAGE_H
#define _ASM_C6X_LINKAGE_H
#ifdef __ASSEMBLER__
#define __ALIGN .align 2
#define __ALIGN_STR ".align 2"
#ifndef __DSBT__
#define ENTRY(name) \
.global name @ \
__ALIGN @ \
name:
#else
#define ENTRY(name) \
.global name @ \
.hidden name @ \
__ALIGN @ \
name:
#endif
#define ENDPROC(name) \
.type name, @function @ \
.size name, . - name
#endif
#include <asm-generic/linkage.h>
#endif /* _ASM_C6X_LINKAGE_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _C6X_MEGAMOD_PIC_H
#define _C6X_MEGAMOD_PIC_H
#ifdef __KERNEL__
extern void __init megamod_pic_init(void);
#endif /* __KERNEL__ */
#endif /* _C6X_MEGAMOD_PIC_H */
#ifndef _ASM_C6X_MMU_CONTEXT_H
#define _ASM_C6X_MMU_CONTEXT_H
#include <asm-generic/nommu_context.h>
#endif /* _ASM_C6X_MMU_CONTEXT_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Updated for 2.6.34 by: Mark Salter (msalter@redhat.com)
*/
#ifndef _ASM_C6X_MODULE_H
#define _ASM_C6X_MODULE_H
#include <asm-generic/module.h>
struct loaded_sections {
unsigned int new_vaddr;
unsigned int loaded;
};
#endif /* _ASM_C6X_MODULE_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_C6X_PAGE_H
#define _ASM_C6X_PAGE_H
#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC
#include <asm-generic/page.h>
#endif /* _ASM_C6X_PAGE_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_PGTABLE_H
#define _ASM_C6X_PGTABLE_H
#include <asm-generic/pgtable-nopud.h>
#include <asm/setup.h>
#include <asm/page.h>
/*
* All 32bit addresses are effectively valid for vmalloc...
* Sort of meaningless for non-VM targets.
*/
#define VMALLOC_START 0
#define VMALLOC_END 0xffffffff
#define pgd_present(pgd) (1)
#define pgd_none(pgd) (0)
#define pgd_bad(pgd) (0)
#define pgd_clear(pgdp)
#define kern_addr_valid(addr) (1)
#define pmd_none(x) (!pmd_val(x))
#define pmd_present(x) (pmd_val(x))
#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
#define pmd_bad(x) (pmd_val(x) & ~PAGE_MASK)
#define PAGE_NONE __pgprot(0) /* these mean nothing to NO_MM */
#define PAGE_SHARED __pgprot(0) /* these mean nothing to NO_MM */
#define PAGE_COPY __pgprot(0) /* these mean nothing to NO_MM */
#define PAGE_READONLY __pgprot(0) /* these mean nothing to NO_MM */
#define PAGE_KERNEL __pgprot(0) /* these mean nothing to NO_MM */
#define pgprot_noncached(prot) (prot)
extern void paging_init(void);
#define __swp_type(x) (0)
#define __swp_offset(x) (0)
#define __swp_entry(typ, off) ((swp_entry_t) { ((typ) | ((off) << 7)) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#define set_pte(pteptr, pteval) (*(pteptr) = pteval)
#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
/*
* ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas, etc.
*/
#define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page)
extern unsigned long empty_zero_page;
#define swapper_pg_dir ((pgd_t *) 0)
/*
 * c6x has no MMU, so define the simplest implementation
*/
#define pgprot_writecombine pgprot_noncached
#endif /* _ASM_C6X_PGTABLE_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Updated for 2.6.34: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_PROCESSOR_H
#define _ASM_C6X_PROCESSOR_H
#include <asm/ptrace.h>
#include <asm/page.h>
#include <asm/current.h>
/*
* User space process size. This is mostly meaningless for NOMMU
* but some C6X processors may have RAM addresses up to 0xFFFFFFFF.
* Since calls like mmap() can return an address or an error, we
* have to allow room for error returns when code does something
* like:
*
* addr = do_mmap(...)
* if ((unsigned long)addr >= TASK_SIZE)
 * ... it's an error code, not an address ...
*
* Here, we allow for 4096 error codes which means we really can't
* use the last 4K page on systems with RAM extending all the way
* to the end of the 32-bit address space.
*/
#define TASK_SIZE 0xFFFFF000
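/*
 * Illustrative sketch, not part of the original header: the reasoning above
 * is the usual "address or negative errno in one word" convention. A caller
 * can tell the two apart because valid userspace addresses stay below
 * TASK_SIZE. The function name is hypothetical.
 */
#if 0	/* example only */
static inline int example_is_mmap_error(unsigned long addr)
{
	return addr >= TASK_SIZE;	/* top 4K is reserved for errno values */
}
#endif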
/*
* This decides where the kernel will search for a free chunk of vm
* space during mmap's. We won't be using it
*/
#define TASK_UNMAPPED_BASE 0
struct thread_struct {
unsigned long long b15_14;
unsigned long long a15_14;
unsigned long long b13_12;
unsigned long long a13_12;
unsigned long long b11_10;
unsigned long long a11_10;
unsigned long long ricl_icl;
unsigned long usp; /* user stack pointer */
unsigned long pc; /* kernel pc */
unsigned long wchan;
};
#define INIT_THREAD \
{ \
.usp = 0, \
.wchan = 0, \
}
#define INIT_MMAP { \
&init_mm, 0, 0, NULL, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, 1, \
NULL, NULL }
#define task_pt_regs(task) \
((struct pt_regs *)(THREAD_START_SP + task_stack_page(task)) - 1)
#define alloc_kernel_stack() __get_free_page(GFP_KERNEL)
#define free_kernel_stack(page) free_page((page))
/* Forward declaration, a strange C thing */
struct task_struct;
extern void start_thread(struct pt_regs *regs, unsigned int pc,
unsigned long usp);
/* Free all resources held by a thread. */
static inline void release_thread(struct task_struct *dead_task)
{
}
/*
* saved kernel SP and DP of a blocked thread.
*/
#ifdef _BIG_ENDIAN
#define thread_saved_ksp(tsk) \
(*(unsigned long *)&(tsk)->thread.b15_14)
#define thread_saved_dp(tsk) \
(*(((unsigned long *)&(tsk)->thread.b15_14) + 1))
#else
#define thread_saved_ksp(tsk) \
(*(((unsigned long *)&(tsk)->thread.b15_14) + 1))
#define thread_saved_dp(tsk) \
(*(unsigned long *)&(tsk)->thread.b15_14)
#endif
extern unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(task) (task_pt_regs(task)->pc)
#define KSTK_ESP(task) (task_pt_regs(task)->sp)
#define cpu_relax() do { } while (0)
extern const struct seq_operations cpuinfo_op;
/* Reset the board */
#define HARD_RESET_NOW()
extern unsigned int c6x_core_freq;
extern void (*c6x_restart)(void);
extern void (*c6x_halt)(void);
#endif /* _ASM_C6X_PROCESSOR_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2010 Texas Instruments Incorporated
* Author: Mark Salter (msalter@redhat.com)
*/
#ifndef _ASM_C6X_PROCINFO_H
#define _ASM_C6X_PROCINFO_H
#ifdef __KERNEL__
struct proc_info_list {
unsigned int cpu_val;
unsigned int cpu_mask;
const char *arch_name;
const char *elf_name;
unsigned int elf_hwcap;
};
#else /* __KERNEL__ */
#include <asm/elf.h>
#warning "Please include asm/elf.h instead"
#endif /* __KERNEL__ */
#endif /* _ASM_C6X_PROCINFO_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2004, 2006, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Updated for 2.6.34: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_PTRACE_H
#define _ASM_C6X_PTRACE_H
#include <uapi/asm/ptrace.h>
#ifndef __ASSEMBLY__
#ifdef _BIG_ENDIAN
#else
#endif
#include <linux/linkage.h>
#define user_mode(regs) ((((regs)->tsr) & 0x40) != 0)
#define instruction_pointer(regs) ((regs)->pc)
#define profile_pc(regs) instruction_pointer(regs)
#define user_stack_pointer(regs) ((regs)->sp)
extern void show_regs(struct pt_regs *);
extern asmlinkage unsigned long syscall_trace_entry(struct pt_regs *regs);
extern asmlinkage void syscall_trace_exit(struct pt_regs *regs);
#endif /* __ASSEMBLY__ */
#endif /* _ASM_C6X_PTRACE_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_C6X_SECTIONS_H
#define _ASM_C6X_SECTIONS_H
#include <asm-generic/sections.h>
extern char _vectors_start[];
extern char _vectors_end[];
extern char _data_lma[];
#endif /* _ASM_C6X_SECTIONS_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
 * Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_SETUP_H
#define _ASM_C6X_SETUP_H
#include <uapi/asm/setup.h>
#include <linux/types.h>
#ifndef __ASSEMBLY__
extern int c6x_add_memory(phys_addr_t start, unsigned long size);
extern unsigned long ram_start;
extern unsigned long ram_end;
extern int c6x_num_cores;
extern unsigned int c6x_silicon_rev;
extern unsigned int c6x_devstat;
extern unsigned char c6x_fuse_mac[6];
extern void machine_init(unsigned long dt_ptr);
extern void time_init(void);
extern void coherent_mem_init(u32 start, u32 size);
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_C6X_SETUP_H */
/*
* Miscellaneous SoC-specific hooks.
*
* Copyright (C) 2011 Texas Instruments Incorporated
*
* Author: Mark Salter <msalter@redhat.com>
*
* This file is licensed under the terms of the GNU General Public License
* version 2. This program is licensed "as is" without any warranty of any
* kind, whether express or implied.
*/
#ifndef _ASM_C6X_SOC_H
#define _ASM_C6X_SOC_H
struct soc_ops {
/* Return active exception event or -1 if none */
int (*get_exception)(void);
/* Assert an event */
void (*assert_event)(unsigned int evt);
};
extern struct soc_ops soc_ops;
extern int soc_get_exception(void);
extern void soc_assert_event(unsigned int event);
extern int soc_mac_addr(unsigned int index, u8 *addr);
/*
 * For MMIO on SoC devices. Registers are always in the same byte order as the CPU.
*/
#define soc_readl(addr) __raw_readl(addr)
#define soc_writel(b, addr) __raw_writel((b), (addr))
#endif /* _ASM_C6X_SOC_H */
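/*
 * Illustrative sketch, not part of the original header: a SoC driver using
 * the native-endian accessors above to set a bit in a memory-mapped
 * register. Names are hypothetical.
 */
#if 0	/* example only */
static inline void example_set_bit(void __iomem *reg, unsigned int bit)
{
	soc_writel(soc_readl(reg) | (1u << bit), reg);
}
#endif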
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_SPECIAL_INSNS_H
#define _ASM_C6X_SPECIAL_INSNS_H
#define get_creg(reg) \
({ unsigned int __x; \
asm volatile ("mvc .s2 " #reg ",%0\n" : "=b"(__x)); __x; })
#define set_creg(reg, v) \
do { unsigned int __x = (unsigned int)(v); \
asm volatile ("mvc .s2 %0," #reg "\n" : : "b"(__x)); \
} while (0)
#define or_creg(reg, n) \
do { unsigned __x, __n = (unsigned)(n); \
asm volatile ("mvc .s2 " #reg ",%0\n" \
"or .l2 %1,%0,%0\n" \
"mvc .s2 %0," #reg "\n" \
"nop\n" \
: "=&b"(__x) : "b"(__n)); \
} while (0)
#define and_creg(reg, n) \
do { unsigned __x, __n = (unsigned)(n); \
asm volatile ("mvc .s2 " #reg ",%0\n" \
"and .l2 %1,%0,%0\n" \
"mvc .s2 %0," #reg "\n" \
"nop\n" \
: "=&b"(__x) : "b"(__n)); \
} while (0)
#define get_coreid() (get_creg(DNUM) & 0xff)
/* Set/get IST */
#define set_ist(x) set_creg(ISTP, x)
#define get_ist() get_creg(ISTP)
/*
* Exception management
*/
#define disable_exception()
#define get_except_type() get_creg(EFR)
#define ack_exception(type) set_creg(ECR, 1 << (type))
#define get_iexcept() get_creg(IERR)
#define set_iexcept(mask) set_creg(IERR, (mask))
#define _extu(x, s, e) \
({ unsigned int __x; \
asm volatile ("extu .S2 %3,%1,%2,%0\n" : \
"=b"(__x) : "n"(s), "n"(e), "b"(x)); \
__x; })
#endif /* _ASM_C6X_SPECIAL_INSNS_H */
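/*
 * Illustrative sketch, not part of the original header: reading and
 * modifying a control register with the helpers above. CSR bit 0 is the
 * global interrupt enable bit; DNUM identifies the core. The function
 * name is hypothetical.
 */
#if 0	/* example only */
static inline void example_enable_gie(void)
{
	if (get_coreid() == 0)		/* say, only on core 0 */
		or_creg(CSR, 1);	/* read-modify-write of CSR */
}
#endif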
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_STRING_H
#define _ASM_C6X_STRING_H
#include <asm/page.h>
#include <linux/linkage.h>
asmlinkage extern void *memcpy(void *to, const void *from, size_t n);
#define __HAVE_ARCH_MEMCPY
#endif /* _ASM_C6X_STRING_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_SWITCH_TO_H
#define _ASM_C6X_SWITCH_TO_H
#include <linux/linkage.h>
#define prepare_to_switch() do { } while (0)
struct task_struct;
struct thread_struct;
asmlinkage void *__switch_to(struct thread_struct *prev,
struct thread_struct *next,
struct task_struct *tsk);
#define switch_to(prev, next, last) \
do { \
current->thread.wchan = (u_long) __builtin_return_address(0); \
(last) = __switch_to(&(prev)->thread, \
&(next)->thread, (prev)); \
mb(); \
current->thread.wchan = 0; \
} while (0)
#endif /* _ASM_C6X_SWITCH_TO_H */
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#ifndef __ASM_C6X_SYSCALL_H
#define __ASM_C6X_SYSCALL_H
#include <uapi/linux/audit.h>
#include <linux/err.h>
#include <linux/sched.h>
static inline int syscall_get_nr(struct task_struct *task,
struct pt_regs *regs)
{
return regs->b0;
}
static inline void syscall_rollback(struct task_struct *task,
struct pt_regs *regs)
{
/* do nothing */
}
static inline long syscall_get_error(struct task_struct *task,
struct pt_regs *regs)
{
return IS_ERR_VALUE(regs->a4) ? regs->a4 : 0;
}
static inline long syscall_get_return_value(struct task_struct *task,
struct pt_regs *regs)
{
return regs->a4;
}
static inline void syscall_set_return_value(struct task_struct *task,
struct pt_regs *regs,
int error, long val)
{
regs->a4 = error ?: val;
}
static inline void syscall_get_arguments(struct task_struct *task,
struct pt_regs *regs,
unsigned long *args)
{
*args++ = regs->a4;
*args++ = regs->b4;
*args++ = regs->a6;
*args++ = regs->b6;
*args++ = regs->a8;
*args = regs->b8;
}
static inline void syscall_set_arguments(struct task_struct *task,
struct pt_regs *regs,
const unsigned long *args)
{
regs->a4 = *args++;
regs->b4 = *args++;
regs->a6 = *args++;
regs->b6 = *args++;
regs->a8 = *args++;
regs->b8 = *args;
}
static inline int syscall_get_arch(struct task_struct *task)
{
return IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)
? AUDIT_ARCH_C6XBE : AUDIT_ARCH_C6X;
}
#endif /* __ASM_C6X_SYSCALL_H */
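/*
 * Illustrative sketch, not part of the original header: a tracer pulling
 * the six syscall arguments out of pt_regs with the accessors above. The
 * register layout (A4, B4, A6, B6, A8, B8) matches the convention noted in
 * the entry.S syscall path. The function name is hypothetical.
 */
#if 0	/* example only */
#include <linux/printk.h>

static void example_dump_syscall(struct task_struct *task, struct pt_regs *regs)
{
	unsigned long args[6];

	syscall_get_arguments(task, regs, args);
	pr_info("syscall %d(%lx, %lx, %lx, %lx, %lx, %lx)\n",
		syscall_get_nr(task, regs),
		args[0], args[1], args[2], args[3], args[4], args[5]);
}
#endif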
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation, version 2.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for
* more details.
*/
#ifndef __ASM_C6X_SYSCALLS_H
#define __ASM_C6X_SYSCALLS_H
#include <linux/compiler.h>
#include <linux/linkage.h>
#include <linux/types.h>
/* The array of function pointers for syscalls. */
extern void *sys_call_table[];
/* The following are trampolines in entry.S to handle 64-bit arguments */
extern long sys_pread_c6x(unsigned int fd, char __user *buf,
size_t count, off_t pos_low, off_t pos_high);
extern long sys_pwrite_c6x(unsigned int fd, const char __user *buf,
size_t count, off_t pos_low, off_t pos_high);
extern long sys_truncate64_c6x(const char __user *path,
off_t length_low, off_t length_high);
extern long sys_ftruncate64_c6x(unsigned int fd,
off_t length_low, off_t length_high);
extern long sys_fadvise64_c6x(int fd, u32 offset_lo, u32 offset_hi,
u32 len, int advice);
extern long sys_fadvise64_64_c6x(int fd, u32 offset_lo, u32 offset_hi,
u32 len_lo, u32 len_hi, int advice);
extern long sys_fallocate_c6x(int fd, int mode,
u32 offset_lo, u32 offset_hi,
u32 len_lo, u32 len_hi);
extern int sys_cache_sync(unsigned long s, unsigned long e);
#include <asm-generic/syscalls.h>
#endif /* __ASM_C6X_SYSCALLS_H */
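/*
 * Illustrative sketch, not part of the original header: 64-bit file offsets
 * reach the trampolines above as two 32-bit words, which entry.S reassembles
 * into a register pair before calling the generic handlers. A hypothetical
 * caller splitting an offset would look like:
 */
#if 0	/* example only */
static long example_pread(unsigned int fd, char __user *buf, size_t count,
			  unsigned long long pos)
{
	return sys_pread_c6x(fd, buf, count,
			     (off_t)(pos & 0xffffffffULL),	/* pos_low  */
			     (off_t)(pos >> 32));		/* pos_high */
}
#endif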
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Updated for 2.6.3x: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_THREAD_INFO_H
#define _ASM_C6X_THREAD_INFO_H
#ifdef __KERNEL__
#include <asm/page.h>
#ifdef CONFIG_4KSTACKS
#define THREAD_SIZE 4096
#define THREAD_SHIFT 12
#define THREAD_SIZE_ORDER 0
#else
#define THREAD_SIZE 8192
#define THREAD_SHIFT 13
#define THREAD_SIZE_ORDER 1
#endif
#define THREAD_START_SP (THREAD_SIZE - 8)
#ifndef __ASSEMBLY__
typedef struct {
unsigned long seg;
} mm_segment_t;
/*
* low level task data.
*/
struct thread_info {
struct task_struct *task; /* main task structure */
unsigned long flags; /* low level flags */
int cpu; /* cpu we're on */
int preempt_count; /* 0 = preemptable, <0 = BUG */
mm_segment_t addr_limit; /* thread address space */
};
/*
* macros/functions for gaining access to the thread information structure
*
* preempt_count needs to be 1 initially, until the scheduler is functional.
*/
#define INIT_THREAD_INFO(tsk) \
{ \
.task = &tsk, \
.flags = 0, \
.cpu = 0, \
.preempt_count = INIT_PREEMPT_COUNT, \
.addr_limit = KERNEL_DS, \
}
/* get the thread information struct of current task */
static inline __attribute__((const))
struct thread_info *current_thread_info(void)
{
struct thread_info *ti;
asm volatile (" clr .s2 B15,0,%1,%0\n"
: "=b" (ti)
: "Iu5" (THREAD_SHIFT - 1));
return ti;
}
#define get_thread_info(ti) get_task_struct((ti)->task)
#define put_thread_info(ti) put_task_struct((ti)->task)
#endif /* __ASSEMBLY__ */
/*
* thread information flag bit numbers
* - pending work-to-be-done flags are in LSW
* - other flags in MSW
*/
#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
#define TIF_NOTIFY_RESUME 1 /* resumption notification requested */
#define TIF_SIGPENDING 2 /* signal pending */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_RESTORE_SIGMASK 4 /* restore signal mask in do_signal() */
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
#define TIF_MEMDIE 17 /* OOM killer killed process */
#define TIF_WORK_MASK 0x00007FFE /* work on irq/exception return */
#define TIF_ALLWORK_MASK 0x00007FFF /* work on any return to u-space */
#endif /* __KERNEL__ */
#endif /* _ASM_C6X_THREAD_INFO_H */
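/*
 * Illustrative sketch, not part of the original header: current_thread_info()
 * above relies on the kernel stack being THREAD_SIZE aligned, so clearing the
 * low THREAD_SHIFT bits of the stack pointer (B15) yields the base of the
 * stack, where struct thread_info lives. A plain-C equivalent (the register
 * binding syntax is an assumption for illustration):
 */
#if 0	/* example only */
static inline struct thread_info *example_thread_info(void)
{
	register unsigned long sp asm("B15");

	return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
#endif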
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _C6X_TIMER64_H
#define _C6X_TIMER64_H
extern void __init timer64_init(void);
#endif /* _C6X_TIMER64_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Modified for 2.6.34: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_TIMEX_H
#define _ASM_C6X_TIMEX_H
#define CLOCK_TICK_RATE ((1000 * 1000000UL) / 6)
/* 64-bit timestamp */
typedef unsigned long long cycles_t;
static inline cycles_t get_cycles(void)
{
unsigned l, h;
asm volatile (" dint\n"
" mvc .s2 TSCL,%0\n"
" mvc .s2 TSCH,%1\n"
" rint\n"
: "=b"(l), "=b"(h));
return ((cycles_t)h << 32) | l;
}
#endif /* _ASM_C6X_TIMEX_H */
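/*
 * Illustrative sketch, not part of the original header: timing a code
 * sequence with the free-running 64-bit time stamp counter read by
 * get_cycles(). The function name is hypothetical.
 */
#if 0	/* example only */
static inline cycles_t example_measure(void (*fn)(void))
{
	cycles_t start = get_cycles();

	fn();
	return get_cycles() - start;	/* elapsed core clock cycles */
}
#endif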
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_C6X_TLB_H
#define _ASM_C6X_TLB_H
#include <asm-generic/tlb.h>
#endif /* _ASM_C6X_TLB_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#ifndef _ASM_C6X_TRAPS_H
#define _ASM_C6X_TRAPS_H
#define EXCEPT_TYPE_NXF 31 /* NMI */
#define EXCEPT_TYPE_EXC 30 /* external exception */
#define EXCEPT_TYPE_IXF 1 /* internal exception */
#define EXCEPT_TYPE_SXF 0 /* software exception */
#define EXCEPT_CAUSE_LBX (1 << 7) /* loop buffer exception */
#define EXCEPT_CAUSE_PRX (1 << 6) /* privilege exception */
#define EXCEPT_CAUSE_RAX (1 << 5) /* resource access exception */
#define EXCEPT_CAUSE_RCX (1 << 4) /* resource conflict exception */
#define EXCEPT_CAUSE_OPX (1 << 3) /* opcode exception */
#define EXCEPT_CAUSE_EPX (1 << 2) /* execute packet exception */
#define EXCEPT_CAUSE_FPX (1 << 1) /* fetch packet exception */
#define EXCEPT_CAUSE_IFX (1 << 0) /* instruction fetch exception */
struct exception_info {
char *kernel_str;
int signo;
int code;
};
extern int (*c6x_nmi_handler)(struct pt_regs *regs);
#endif /* _ASM_C6X_TRAPS_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_UACCESS_H
#define _ASM_C6X_UACCESS_H
#include <linux/types.h>
#include <linux/compiler.h>
#include <linux/string.h>
/*
* C6X supports unaligned 32 and 64 bit loads and stores.
*/
static inline __must_check unsigned long
raw_copy_from_user(void *to, const void __user *from, unsigned long n)
{
u32 tmp32;
u64 tmp64;
if (__builtin_constant_p(n)) {
switch (n) {
case 1:
*(u8 *)to = *(u8 __force *)from;
return 0;
case 4:
asm volatile ("ldnw .d1t1 *%2,%0\n"
"nop 4\n"
"stnw .d1t1 %0,*%1\n"
: "=&a"(tmp32)
: "A"(to), "a"(from)
: "memory");
return 0;
case 8:
asm volatile ("ldndw .d1t1 *%2,%0\n"
"nop 4\n"
"stndw .d1t1 %0,*%1\n"
: "=&a"(tmp64)
: "a"(to), "a"(from)
: "memory");
return 0;
default:
break;
}
}
memcpy(to, (const void __force *)from, n);
return 0;
}
static inline __must_check unsigned long
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
u32 tmp32;
u64 tmp64;
if (__builtin_constant_p(n)) {
switch (n) {
case 1:
*(u8 __force *)to = *(u8 *)from;
return 0;
case 4:
asm volatile ("ldnw .d1t1 *%2,%0\n"
"nop 4\n"
"stnw .d1t1 %0,*%1\n"
: "=&a"(tmp32)
: "a"(to), "a"(from)
: "memory");
return 0;
case 8:
asm volatile ("ldndw .d1t1 *%2,%0\n"
"nop 4\n"
"stndw .d1t1 %0,*%1\n"
: "=&a"(tmp64)
: "a"(to), "a"(from)
: "memory");
return 0;
default:
break;
}
}
memcpy((void __force *)to, from, n);
return 0;
}
#define INLINE_COPY_FROM_USER
#define INLINE_COPY_TO_USER
extern int _access_ok(unsigned long addr, unsigned long size);
#ifdef CONFIG_ACCESS_CHECK
#define __access_ok _access_ok
#endif
#include <asm-generic/uaccess.h>
#endif /* _ASM_C6X_UACCESS_H */
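/*
 * Illustrative sketch, not part of the original header: when the length is
 * a compile-time constant of 4 or 8, raw_copy_from_user() above reduces to
 * a single unaligned load/store pair (ldnw/stnw or ldndw/stndw) instead of
 * a memcpy() call. The function name is hypothetical.
 */
#if 0	/* example only */
#include <linux/uaccess.h>
#include <linux/errno.h>

static inline int example_fetch_u32(u32 *dst, const u32 __user *src)
{
	return copy_from_user(dst, src, sizeof(*dst)) ? -EFAULT : 0;
}
#endif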
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
* Rewritten for 2.6.3x: Mark Salter <msalter@redhat.com>
*/
#ifndef _ASM_C6X_UNALIGNED_H
#define _ASM_C6X_UNALIGNED_H
#include <linux/swab.h>
#include <linux/unaligned/generic.h>
/*
* The C64x+ can do unaligned word and dword accesses in hardware
* using special load/store instructions.
*/
static inline u16 get_unaligned_le16(const void *p)
{
const u8 *_p = p;
return _p[0] | _p[1] << 8;
}
static inline u16 get_unaligned_be16(const void *p)
{
const u8 *_p = p;
return _p[0] << 8 | _p[1];
}
static inline void put_unaligned_le16(u16 val, void *p)
{
u8 *_p = p;
_p[0] = val;
_p[1] = val >> 8;
}
static inline void put_unaligned_be16(u16 val, void *p)
{
u8 *_p = p;
_p[0] = val >> 8;
_p[1] = val;
}
static inline u32 get_unaligned32(const void *p)
{
u32 val = (u32) p;
asm (" ldnw .d1t1 *%0,%0\n"
" nop 4\n"
: "+a"(val));
return val;
}
static inline void put_unaligned32(u32 val, void *p)
{
asm volatile (" stnw .d2t1 %0,*%1\n"
: : "a"(val), "b"(p) : "memory");
}
static inline u64 get_unaligned64(const void *p)
{
u64 val;
asm volatile (" ldndw .d1t1 *%1,%0\n"
" nop 4\n"
: "=a"(val) : "a"(p));
return val;
}
static inline void put_unaligned64(u64 val, void *p)
{
asm volatile (" stndw .d2t1 %0,*%1\n"
: : "a"(val), "b"(p) : "memory");
}
#ifdef CONFIG_CPU_BIG_ENDIAN
#define get_unaligned_le32(p) __swab32(get_unaligned32(p))
#define get_unaligned_le64(p) __swab64(get_unaligned64(p))
#define get_unaligned_be32(p) get_unaligned32(p)
#define get_unaligned_be64(p) get_unaligned64(p)
#define put_unaligned_le32(v, p) put_unaligned32(__swab32(v), (p))
#define put_unaligned_le64(v, p) put_unaligned64(__swab64(v), (p))
#define put_unaligned_be32(v, p) put_unaligned32((v), (p))
#define put_unaligned_be64(v, p) put_unaligned64((v), (p))
#define get_unaligned __get_unaligned_be
#define put_unaligned __put_unaligned_be
#else
#define get_unaligned_le32(p) get_unaligned32(p)
#define get_unaligned_le64(p) get_unaligned64(p)
#define get_unaligned_be32(p) __swab32(get_unaligned32(p))
#define get_unaligned_be64(p) __swab64(get_unaligned64(p))
#define put_unaligned_le32(v, p) put_unaligned32((v), (p))
#define put_unaligned_le64(v, p) put_unaligned64((v), (p))
#define put_unaligned_be32(v, p) put_unaligned32(__swab32(v), (p))
#define put_unaligned_be64(v, p) put_unaligned64(__swab64(v), (p))
#define get_unaligned __get_unaligned_le
#define put_unaligned __put_unaligned_le
#endif
#endif /* _ASM_C6X_UNALIGNED_H */
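/*
 * Illustrative sketch, not part of the original header: the put_unaligned_*
 * helpers above resolve to the single-instruction stnw/stndw accessors, so
 * storing a 32-bit value at an odd offset of a byte buffer needs no manual
 * shifting. The function name is hypothetical.
 */
#if 0	/* example only */
static inline void example_pack_le32(u8 *buf, u32 val)
{
	put_unaligned_le32(val, buf + 1);	/* one unaligned store */
}
#endif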
#ifndef _ASM_C6X_VMALLOC_H
#define _ASM_C6X_VMALLOC_H
#endif /* _ASM_C6X_VMALLOC_H */
# SPDX-License-Identifier: GPL-2.0
generic-y += ucontext.h
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef _ASM_C6X_BYTEORDER_H
#define _ASM_C6X_BYTEORDER_H
#include <asm/types.h>
#ifdef _BIG_ENDIAN
#include <linux/byteorder/big_endian.h>
#else /* _BIG_ENDIAN */
#include <linux/byteorder/little_endian.h>
#endif /* _BIG_ENDIAN */
#endif /* _ASM_C6X_BYTEORDER_H */
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2004, 2006, 2009, 2010 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Updated for 2.6.34: Mark Salter <msalter@redhat.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _UAPI_ASM_C6X_PTRACE_H
#define _UAPI_ASM_C6X_PTRACE_H
#define BKPT_OPCODE 0x56454314 /* illegal opcode */
#ifdef _BIG_ENDIAN
#define PT_LO(odd, even) odd
#define PT_HI(odd, even) even
#else
#define PT_LO(odd, even) even
#define PT_HI(odd, even) odd
#endif
#define PT_A4_ORG PT_LO(1, 0)
#define PT_TSR PT_HI(1, 0)
#define PT_ILC PT_LO(3, 2)
#define PT_RILC PT_HI(3, 2)
#define PT_CSR PT_LO(5, 4)
#define PT_PC PT_HI(5, 4)
#define PT_B16 PT_LO(7, 6)
#define PT_B17 PT_HI(7, 6)
#define PT_B18 PT_LO(9, 8)
#define PT_B19 PT_HI(9, 8)
#define PT_B20 PT_LO(11, 10)
#define PT_B21 PT_HI(11, 10)
#define PT_B22 PT_LO(13, 12)
#define PT_B23 PT_HI(13, 12)
#define PT_B24 PT_LO(15, 14)
#define PT_B25 PT_HI(15, 14)
#define PT_B26 PT_LO(17, 16)
#define PT_B27 PT_HI(17, 16)
#define PT_B28 PT_LO(19, 18)
#define PT_B29 PT_HI(19, 18)
#define PT_B30 PT_LO(21, 20)
#define PT_B31 PT_HI(21, 20)
#define PT_B0 PT_LO(23, 22)
#define PT_B1 PT_HI(23, 22)
#define PT_B2 PT_LO(25, 24)
#define PT_B3 PT_HI(25, 24)
#define PT_B4 PT_LO(27, 26)
#define PT_B5 PT_HI(27, 26)
#define PT_B6 PT_LO(29, 28)
#define PT_B7 PT_HI(29, 28)
#define PT_B8 PT_LO(31, 30)
#define PT_B9 PT_HI(31, 30)
#define PT_B10 PT_LO(33, 32)
#define PT_B11 PT_HI(33, 32)
#define PT_B12 PT_LO(35, 34)
#define PT_B13 PT_HI(35, 34)
#define PT_A16 PT_LO(37, 36)
#define PT_A17 PT_HI(37, 36)
#define PT_A18 PT_LO(39, 38)
#define PT_A19 PT_HI(39, 38)
#define PT_A20 PT_LO(41, 40)
#define PT_A21 PT_HI(41, 40)
#define PT_A22 PT_LO(43, 42)
#define PT_A23 PT_HI(43, 42)
#define PT_A24 PT_LO(45, 44)
#define PT_A25 PT_HI(45, 44)
#define PT_A26 PT_LO(47, 46)
#define PT_A27 PT_HI(47, 46)
#define PT_A28 PT_LO(49, 48)
#define PT_A29 PT_HI(49, 48)
#define PT_A30 PT_LO(51, 50)
#define PT_A31 PT_HI(51, 50)
#define PT_A0 PT_LO(53, 52)
#define PT_A1 PT_HI(53, 52)
#define PT_A2 PT_LO(55, 54)
#define PT_A3 PT_HI(55, 54)
#define PT_A4 PT_LO(57, 56)
#define PT_A5 PT_HI(57, 56)
#define PT_A6 PT_LO(59, 58)
#define PT_A7 PT_HI(59, 58)
#define PT_A8 PT_LO(61, 60)
#define PT_A9 PT_HI(61, 60)
#define PT_A10 PT_LO(63, 62)
#define PT_A11 PT_HI(63, 62)
#define PT_A12 PT_LO(65, 64)
#define PT_A13 PT_HI(65, 64)
#define PT_A14 PT_LO(67, 66)
#define PT_A15 PT_HI(67, 66)
#define PT_B14 PT_LO(69, 68)
#define PT_B15 PT_HI(69, 68)
#define NR_PTREGS 70
#define PT_DP PT_B14 /* Data Segment Pointer (B14) */
#define PT_SP PT_B15 /* Stack Pointer (B15) */
#define PTRACE_GETFDPIC 31 /* get the ELF fdpic loadmap address */
#define PTRACE_GETFDPIC_EXEC 0 /* [addr] request the executable loadmap */
#define PTRACE_GETFDPIC_INTERP 1 /* [addr] request the interpreter loadmap */
#ifndef __ASSEMBLY__
#ifdef _BIG_ENDIAN
#define REG_PAIR(odd, even) unsigned long odd; unsigned long even
#else
#define REG_PAIR(odd, even) unsigned long even; unsigned long odd
#endif
/*
* this struct defines the way the registers are stored on the
* stack during a system call. fields defined with REG_PAIR
* are saved and restored using double-word memory operations
 * which means the word ordering of the pair depends on endianness.
*/
struct pt_regs {
REG_PAIR(tsr, orig_a4);
REG_PAIR(rilc, ilc);
REG_PAIR(pc, csr);
REG_PAIR(b17, b16);
REG_PAIR(b19, b18);
REG_PAIR(b21, b20);
REG_PAIR(b23, b22);
REG_PAIR(b25, b24);
REG_PAIR(b27, b26);
REG_PAIR(b29, b28);
REG_PAIR(b31, b30);
REG_PAIR(b1, b0);
REG_PAIR(b3, b2);
REG_PAIR(b5, b4);
REG_PAIR(b7, b6);
REG_PAIR(b9, b8);
REG_PAIR(b11, b10);
REG_PAIR(b13, b12);
REG_PAIR(a17, a16);
REG_PAIR(a19, a18);
REG_PAIR(a21, a20);
REG_PAIR(a23, a22);
REG_PAIR(a25, a24);
REG_PAIR(a27, a26);
REG_PAIR(a29, a28);
REG_PAIR(a31, a30);
REG_PAIR(a1, a0);
REG_PAIR(a3, a2);
REG_PAIR(a5, a4);
REG_PAIR(a7, a6);
REG_PAIR(a9, a8);
REG_PAIR(a11, a10);
REG_PAIR(a13, a12);
REG_PAIR(a15, a14);
REG_PAIR(sp, dp);
};
#endif /* __ASSEMBLY__ */
#endif /* _UAPI_ASM_C6X_PTRACE_H */
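/*
 * Illustrative sketch, not part of the original header: the PT_* values
 * above are word indices into the saved register area, so (assuming the
 * usual PTRACE_PEEKUSER convention) a userspace debugger could read a
 * traced child's A4 register at byte offset 4 * PT_A4. Names below are
 * hypothetical userspace code.
 */
#if 0	/* example only, userspace */
#include <sys/types.h>
#include <sys/ptrace.h>

static long example_peek_a4(pid_t child)
{
	return ptrace(PTRACE_PEEKUSER, child,
		      (void *)(unsigned long)(4 * PT_A4), 0);
}
#endif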
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef _UAPI_ASM_C6X_SETUP_H
#define _UAPI_ASM_C6X_SETUP_H
#define COMMAND_LINE_SIZE 1024
#endif /* _UAPI_ASM_C6X_SETUP_H */
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_C6X_SIGCONTEXT_H
#define _ASM_C6X_SIGCONTEXT_H
struct sigcontext {
unsigned long sc_mask; /* old sigmask */
unsigned long sc_sp; /* old user stack pointer */
unsigned long sc_a4;
unsigned long sc_b4;
unsigned long sc_a6;
unsigned long sc_b6;
unsigned long sc_a8;
unsigned long sc_b8;
unsigned long sc_a0;
unsigned long sc_a1;
unsigned long sc_a2;
unsigned long sc_a3;
unsigned long sc_a5;
unsigned long sc_a7;
unsigned long sc_a9;
unsigned long sc_b0;
unsigned long sc_b1;
unsigned long sc_b2;
unsigned long sc_b3;
unsigned long sc_b5;
unsigned long sc_b7;
unsigned long sc_b9;
unsigned long sc_a16;
unsigned long sc_a17;
unsigned long sc_a18;
unsigned long sc_a19;
unsigned long sc_a20;
unsigned long sc_a21;
unsigned long sc_a22;
unsigned long sc_a23;
unsigned long sc_a24;
unsigned long sc_a25;
unsigned long sc_a26;
unsigned long sc_a27;
unsigned long sc_a28;
unsigned long sc_a29;
unsigned long sc_a30;
unsigned long sc_a31;
unsigned long sc_b16;
unsigned long sc_b17;
unsigned long sc_b18;
unsigned long sc_b19;
unsigned long sc_b20;
unsigned long sc_b21;
unsigned long sc_b22;
unsigned long sc_b23;
unsigned long sc_b24;
unsigned long sc_b25;
unsigned long sc_b26;
unsigned long sc_b27;
unsigned long sc_b28;
unsigned long sc_b29;
unsigned long sc_b30;
unsigned long sc_b31;
unsigned long sc_csr;
unsigned long sc_pc;
};
#endif /* _ASM_C6X_SIGCONTEXT_H */
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_C6X_SWAB_H
#define _ASM_C6X_SWAB_H
static inline __attribute_const__ __u16 __c6x_swab16(__u16 val)
{
asm("swap4 .l1 %0,%0\n" : "+a"(val));
return val;
}
static inline __attribute_const__ __u32 __c6x_swab32(__u32 val)
{
asm("swap4 .l1 %0,%0\n"
"swap2 .l1 %0,%0\n"
: "+a"(val));
return val;
}
static inline __attribute_const__ __u64 __c6x_swab64(__u64 val)
{
asm(" swap2 .s1 %p0,%P0\n"
"|| swap2 .l1 %P0,%p0\n"
" swap4 .l1 %p0,%p0\n"
" swap4 .l1 %P0,%P0\n"
: "+a"(val));
return val;
}
static inline __attribute_const__ __u32 __c6x_swahw32(__u32 val)
{
asm("swap2 .l1 %0,%0\n" : "+a"(val));
return val;
}
static inline __attribute_const__ __u32 __c6x_swahb32(__u32 val)
{
asm("swap4 .l1 %0,%0\n" : "+a"(val));
return val;
}
#define __arch_swab16 __c6x_swab16
#define __arch_swab32 __c6x_swab32
#define __arch_swab64 __c6x_swab64
#define __arch_swahw32 __c6x_swahw32
#define __arch_swahb32 __c6x_swahb32
#endif /* _ASM_C6X_SWAB_H */
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2011 Texas Instruments Incorporated
*
* Based on arch/tile version.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation, version 2.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for
* more details.
*/
#define __ARCH_WANT_RENAMEAT
#define __ARCH_WANT_STAT64
#define __ARCH_WANT_SET_GET_RLIMIT
#define __ARCH_WANT_SYS_CLONE
#define __ARCH_WANT_TIME32_SYSCALLS
/* Use the standard ABI for syscalls. */
#include <asm-generic/unistd.h>
/* C6X-specific syscalls. */
#define __NR_cache_sync (__NR_arch_specific_syscall + 0)
__SYSCALL(__NR_cache_sync, sys_cache_sync)
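/*
 * Illustrative sketch, not part of the original header: userspace reaches
 * the C6X-specific cache maintenance call through the generic syscall(2)
 * mechanism, passing the start and end addresses of the region to be
 * synchronized. The wrapper name is hypothetical.
 */
#if 0	/* example only, userspace */
#include <unistd.h>
#include <sys/syscall.h>

static int example_cache_sync(unsigned long start, unsigned long end)
{
	return syscall(__NR_cache_sync, start, end);
}
#endif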
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for arch/c6x/kernel/
#
extra-y := head.o vmlinux.lds
obj-y := process.o traps.o irq.o signal.o ptrace.o
obj-y += setup.o sys_c6x.o time.o devicetree.o
obj-y += switch_to.o entry.o vectors.o c6x_ksyms.o
obj-y += soc.o
obj-$(CONFIG_MODULES) += module.o
// SPDX-License-Identifier: GPL-2.0
/*
* Generate definitions needed by assembly language modules.
* This code generates raw asm output which is post-processed
* to extract and format the required data.
*/
#include <linux/sched.h>
#include <linux/thread_info.h>
#include <asm/procinfo.h>
#include <linux/kbuild.h>
#include <linux/unistd.h>
void foo(void)
{
OFFSET(REGS_A16, pt_regs, a16);
OFFSET(REGS_A17, pt_regs, a17);
OFFSET(REGS_A18, pt_regs, a18);
OFFSET(REGS_A19, pt_regs, a19);
OFFSET(REGS_A20, pt_regs, a20);
OFFSET(REGS_A21, pt_regs, a21);
OFFSET(REGS_A22, pt_regs, a22);
OFFSET(REGS_A23, pt_regs, a23);
OFFSET(REGS_A24, pt_regs, a24);
OFFSET(REGS_A25, pt_regs, a25);
OFFSET(REGS_A26, pt_regs, a26);
OFFSET(REGS_A27, pt_regs, a27);
OFFSET(REGS_A28, pt_regs, a28);
OFFSET(REGS_A29, pt_regs, a29);
OFFSET(REGS_A30, pt_regs, a30);
OFFSET(REGS_A31, pt_regs, a31);
OFFSET(REGS_B16, pt_regs, b16);
OFFSET(REGS_B17, pt_regs, b17);
OFFSET(REGS_B18, pt_regs, b18);
OFFSET(REGS_B19, pt_regs, b19);
OFFSET(REGS_B20, pt_regs, b20);
OFFSET(REGS_B21, pt_regs, b21);
OFFSET(REGS_B22, pt_regs, b22);
OFFSET(REGS_B23, pt_regs, b23);
OFFSET(REGS_B24, pt_regs, b24);
OFFSET(REGS_B25, pt_regs, b25);
OFFSET(REGS_B26, pt_regs, b26);
OFFSET(REGS_B27, pt_regs, b27);
OFFSET(REGS_B28, pt_regs, b28);
OFFSET(REGS_B29, pt_regs, b29);
OFFSET(REGS_B30, pt_regs, b30);
OFFSET(REGS_B31, pt_regs, b31);
OFFSET(REGS_A0, pt_regs, a0);
OFFSET(REGS_A1, pt_regs, a1);
OFFSET(REGS_A2, pt_regs, a2);
OFFSET(REGS_A3, pt_regs, a3);
OFFSET(REGS_A4, pt_regs, a4);
OFFSET(REGS_A5, pt_regs, a5);
OFFSET(REGS_A6, pt_regs, a6);
OFFSET(REGS_A7, pt_regs, a7);
OFFSET(REGS_A8, pt_regs, a8);
OFFSET(REGS_A9, pt_regs, a9);
OFFSET(REGS_A10, pt_regs, a10);
OFFSET(REGS_A11, pt_regs, a11);
OFFSET(REGS_A12, pt_regs, a12);
OFFSET(REGS_A13, pt_regs, a13);
OFFSET(REGS_A14, pt_regs, a14);
OFFSET(REGS_A15, pt_regs, a15);
OFFSET(REGS_B0, pt_regs, b0);
OFFSET(REGS_B1, pt_regs, b1);
OFFSET(REGS_B2, pt_regs, b2);
OFFSET(REGS_B3, pt_regs, b3);
OFFSET(REGS_B4, pt_regs, b4);
OFFSET(REGS_B5, pt_regs, b5);
OFFSET(REGS_B6, pt_regs, b6);
OFFSET(REGS_B7, pt_regs, b7);
OFFSET(REGS_B8, pt_regs, b8);
OFFSET(REGS_B9, pt_regs, b9);
OFFSET(REGS_B10, pt_regs, b10);
OFFSET(REGS_B11, pt_regs, b11);
OFFSET(REGS_B12, pt_regs, b12);
OFFSET(REGS_B13, pt_regs, b13);
OFFSET(REGS_DP, pt_regs, dp);
OFFSET(REGS_SP, pt_regs, sp);
OFFSET(REGS_TSR, pt_regs, tsr);
OFFSET(REGS_ORIG_A4, pt_regs, orig_a4);
DEFINE(REGS__END, sizeof(struct pt_regs));
BLANK();
OFFSET(THREAD_PC, thread_struct, pc);
OFFSET(THREAD_B15_14, thread_struct, b15_14);
OFFSET(THREAD_A15_14, thread_struct, a15_14);
OFFSET(THREAD_B13_12, thread_struct, b13_12);
OFFSET(THREAD_A13_12, thread_struct, a13_12);
OFFSET(THREAD_B11_10, thread_struct, b11_10);
OFFSET(THREAD_A11_10, thread_struct, a11_10);
OFFSET(THREAD_RICL_ICL, thread_struct, ricl_icl);
BLANK();
OFFSET(TASK_STATE, task_struct, state);
BLANK();
OFFSET(THREAD_INFO_FLAGS, thread_info, flags);
OFFSET(THREAD_INFO_PREEMPT_COUNT, thread_info, preempt_count);
BLANK();
/* These would be unnecessary if we ran asm files
* through the preprocessor.
*/
DEFINE(KTHREAD_SHIFT, THREAD_SHIFT);
DEFINE(KTHREAD_START_SP, THREAD_START_SP);
DEFINE(ENOSYS_, ENOSYS);
DEFINE(NR_SYSCALLS_, __NR_syscalls);
DEFINE(_TIF_SYSCALL_TRACE, (1<<TIF_SYSCALL_TRACE));
DEFINE(_TIF_NOTIFY_RESUME, (1<<TIF_NOTIFY_RESUME));
DEFINE(_TIF_SIGPENDING, (1<<TIF_SIGPENDING));
DEFINE(_TIF_NEED_RESCHED, (1<<TIF_NEED_RESCHED));
DEFINE(_TIF_NOTIFY_SIGNAL, (1<<TIF_NOTIFY_SIGNAL));
DEFINE(_TIF_ALLWORK_MASK, TIF_ALLWORK_MASK);
DEFINE(_TIF_WORK_MASK, TIF_WORK_MASK);
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#include <linux/module.h>
#include <asm/checksum.h>
#include <linux/io.h>
/*
* libgcc functions - used internally by the compiler...
*/
extern int __c6xabi_divi(int dividend, int divisor);
EXPORT_SYMBOL(__c6xabi_divi);
extern unsigned __c6xabi_divu(unsigned dividend, unsigned divisor);
EXPORT_SYMBOL(__c6xabi_divu);
extern int __c6xabi_remi(int dividend, int divisor);
EXPORT_SYMBOL(__c6xabi_remi);
extern unsigned __c6xabi_remu(unsigned dividend, unsigned divisor);
EXPORT_SYMBOL(__c6xabi_remu);
extern int __c6xabi_divremi(int dividend, int divisor);
EXPORT_SYMBOL(__c6xabi_divremi);
extern unsigned __c6xabi_divremu(unsigned dividend, unsigned divisor);
EXPORT_SYMBOL(__c6xabi_divremu);
extern unsigned long long __c6xabi_mpyll(unsigned long long src1,
unsigned long long src2);
EXPORT_SYMBOL(__c6xabi_mpyll);
extern long long __c6xabi_negll(long long src);
EXPORT_SYMBOL(__c6xabi_negll);
extern unsigned long long __c6xabi_llshl(unsigned long long src1, uint src2);
EXPORT_SYMBOL(__c6xabi_llshl);
extern long long __c6xabi_llshr(long long src1, uint src2);
EXPORT_SYMBOL(__c6xabi_llshr);
extern unsigned long long __c6xabi_llshru(unsigned long long src1, uint src2);
EXPORT_SYMBOL(__c6xabi_llshru);
extern void __c6xabi_strasgi(int *dst, const int *src, unsigned cnt);
EXPORT_SYMBOL(__c6xabi_strasgi);
extern void __c6xabi_push_rts(void);
EXPORT_SYMBOL(__c6xabi_push_rts);
extern void __c6xabi_pop_rts(void);
EXPORT_SYMBOL(__c6xabi_pop_rts);
extern void __c6xabi_strasgi_64plus(int *dst, const int *src, unsigned cnt);
EXPORT_SYMBOL(__c6xabi_strasgi_64plus);
/* lib functions */
EXPORT_SYMBOL(memcpy);
// SPDX-License-Identifier: GPL-2.0-only
/*
* Architecture specific OF callbacks.
*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#include <linux/init.h>
#include <linux/memblock.h>
void __init early_init_dt_add_memory_arch(u64 base, u64 size)
{
c6x_add_memory(base, size);
}
; SPDX-License-Identifier: GPL-2.0-only
;
; Port on Texas Instruments TMS320C6x architecture
;
; Copyright (C) 2004-2011 Texas Instruments Incorporated
; Author: Aurelien Jacquiot (aurelien.jacquiot@virtuallogix.com)
; Updated for 2.6.34: Mark Salter <msalter@redhat.com>
;
#include <linux/sys.h>
#include <linux/linkage.h>
#include <asm/thread_info.h>
#include <asm/asm-offsets.h>
#include <asm/unistd.h>
#include <asm/errno.h>
; Register naming
#define DP B14
#define SP B15
#ifndef CONFIG_PREEMPTION
#define resume_kernel restore_all
#endif
.altmacro
.macro MASK_INT reg
MVC .S2 CSR,reg
CLR .S2 reg,0,0,reg
MVC .S2 reg,CSR
.endm
.macro UNMASK_INT reg
MVC .S2 CSR,reg
SET .S2 reg,0,0,reg
MVC .S2 reg,CSR
.endm
.macro GET_THREAD_INFO reg
SHR .S1X SP,THREAD_SHIFT,reg
SHL .S1 reg,THREAD_SHIFT,reg
.endm
;;
;; This defines the normal kernel pt_regs layout.
;;
.macro SAVE_ALL __rp __tsr
STW .D2T2 B0,*SP--[2] ; save original B0
MVKL .S2 current_ksp,B0
MVKH .S2 current_ksp,B0
LDW .D2T2 *B0,B1 ; KSP
NOP 3
STW .D2T2 B1,*+SP[1] ; save original B1
XOR .D2 SP,B1,B0 ; (SP ^ KSP)
LDW .D2T2 *+SP[1],B1 ; restore B0/B1
LDW .D2T2 *++SP[2],B0
SHR .S2 B0,THREAD_SHIFT,B0 ; 0 if already using kstack
[B0] STDW .D2T2 SP:DP,*--B1[1] ; user: save user sp/dp kstack
[B0] MV .S2 B1,SP ; and switch to kstack
||[!B0] STDW .D2T2 SP:DP,*--SP[1] ; kernel: save on current stack
SUBAW .D2 SP,2,SP
ADD .D1X SP,-8,A15
|| STDW .D2T1 A15:A14,*SP--[16] ; save A15:A14
STDW .D2T2 B13:B12,*SP--[1]
|| STDW .D1T1 A13:A12,*A15--[1]
|| MVC .S2 __rp,B13
STDW .D2T2 B11:B10,*SP--[1]
|| STDW .D1T1 A11:A10,*A15--[1]
|| MVC .S2 CSR,B12
STDW .D2T2 B9:B8,*SP--[1]
|| STDW .D1T1 A9:A8,*A15--[1]
|| MVC .S2 RILC,B11
STDW .D2T2 B7:B6,*SP--[1]
|| STDW .D1T1 A7:A6,*A15--[1]
|| MVC .S2 ILC,B10
STDW .D2T2 B5:B4,*SP--[1]
|| STDW .D1T1 A5:A4,*A15--[1]
STDW .D2T2 B3:B2,*SP--[1]
|| STDW .D1T1 A3:A2,*A15--[1]
|| MVC .S2 __tsr,B5
STDW .D2T2 B1:B0,*SP--[1]
|| STDW .D1T1 A1:A0,*A15--[1]
|| MV .S1X B5,A5
STDW .D2T2 B31:B30,*SP--[1]
|| STDW .D1T1 A31:A30,*A15--[1]
STDW .D2T2 B29:B28,*SP--[1]
|| STDW .D1T1 A29:A28,*A15--[1]
STDW .D2T2 B27:B26,*SP--[1]
|| STDW .D1T1 A27:A26,*A15--[1]
STDW .D2T2 B25:B24,*SP--[1]
|| STDW .D1T1 A25:A24,*A15--[1]
STDW .D2T2 B23:B22,*SP--[1]
|| STDW .D1T1 A23:A22,*A15--[1]
STDW .D2T2 B21:B20,*SP--[1]
|| STDW .D1T1 A21:A20,*A15--[1]
STDW .D2T2 B19:B18,*SP--[1]
|| STDW .D1T1 A19:A18,*A15--[1]
STDW .D2T2 B17:B16,*SP--[1]
|| STDW .D1T1 A17:A16,*A15--[1]
STDW .D2T2 B13:B12,*SP--[1] ; save PC and CSR
STDW .D2T2 B11:B10,*SP--[1] ; save RILC and ILC
STDW .D2T1 A5:A4,*SP--[1] ; save TSR and orig A4
;; We left an unused word on the stack just above pt_regs.
;; It is used to save whether or not this frame is due to
;; a syscall. It is cleared here, but the syscall handler
;; sets it to a non-zero value.
MVK .L2 0,B1
STW .D2T2 B1,*+SP(REGS__END+8) ; clear syscall flag
.endm
.macro RESTORE_ALL __rp __tsr
LDDW .D2T2 *++SP[1],B9:B8 ; get TSR (B9)
LDDW .D2T2 *++SP[1],B11:B10 ; get RILC (B11) and ILC (B10)
LDDW .D2T2 *++SP[1],B13:B12 ; get PC (B13) and CSR (B12)
ADDAW .D1X SP,30,A15
LDDW .D1T1 *++A15[1],A17:A16
|| LDDW .D2T2 *++SP[1],B17:B16
LDDW .D1T1 *++A15[1],A19:A18
|| LDDW .D2T2 *++SP[1],B19:B18
LDDW .D1T1 *++A15[1],A21:A20
|| LDDW .D2T2 *++SP[1],B21:B20
LDDW .D1T1 *++A15[1],A23:A22
|| LDDW .D2T2 *++SP[1],B23:B22
LDDW .D1T1 *++A15[1],A25:A24
|| LDDW .D2T2 *++SP[1],B25:B24
LDDW .D1T1 *++A15[1],A27:A26
|| LDDW .D2T2 *++SP[1],B27:B26
LDDW .D1T1 *++A15[1],A29:A28
|| LDDW .D2T2 *++SP[1],B29:B28
LDDW .D1T1 *++A15[1],A31:A30
|| LDDW .D2T2 *++SP[1],B31:B30
LDDW .D1T1 *++A15[1],A1:A0
|| LDDW .D2T2 *++SP[1],B1:B0
LDDW .D1T1 *++A15[1],A3:A2
|| LDDW .D2T2 *++SP[1],B3:B2
|| MVC .S2 B9,__tsr
LDDW .D1T1 *++A15[1],A5:A4
|| LDDW .D2T2 *++SP[1],B5:B4
|| MVC .S2 B11,RILC
LDDW .D1T1 *++A15[1],A7:A6
|| LDDW .D2T2 *++SP[1],B7:B6
|| MVC .S2 B10,ILC
LDDW .D1T1 *++A15[1],A9:A8
|| LDDW .D2T2 *++SP[1],B9:B8
|| MVC .S2 B13,__rp
LDDW .D1T1 *++A15[1],A11:A10
|| LDDW .D2T2 *++SP[1],B11:B10
|| MVC .S2 B12,CSR
LDDW .D1T1 *++A15[1],A13:A12
|| LDDW .D2T2 *++SP[1],B13:B12
MV .D2X A15,SP
|| MVKL .S1 current_ksp,A15
MVKH .S1 current_ksp,A15
|| ADDAW .D1X SP,6,A14
STW .D1T1 A14,*A15 ; save kernel stack pointer
LDDW .D2T1 *++SP[1],A15:A14
B .S2 __rp ; return from interruption
LDDW .D2T2 *+SP[1],SP:DP
NOP 4
.endm
.section .text
;;
;; Jump to schedule() then return to ret_from_exception
;;
_reschedule:
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 schedule,A0
MVKH .S1 schedule,A0
B .S2X A0
#else
B .S1 schedule
#endif
ADDKPC .S2 ret_from_exception,B3,4
;;
;; Called before syscall handler when process is being debugged
;;
tracesys_on:
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 syscall_trace_entry,A0
MVKH .S1 syscall_trace_entry,A0
B .S2X A0
#else
B .S1 syscall_trace_entry
#endif
ADDKPC .S2 ret_from_syscall_trace,B3,3
ADD .S1X 8,SP,A4
ret_from_syscall_trace:
;; tracing returns (possibly new) syscall number
MV .D2X A4,B0
|| MVK .S2 __NR_syscalls,B1
CMPLTU .L2 B0,B1,B1
[!B1] BNOP .S2 ret_from_syscall_function,5
|| MVK .S1 -ENOSYS,A4
;; reload syscall args from (possibly modified) stack frame
;; and get syscall handler addr from sys_call_table:
LDW .D2T2 *+SP(REGS_B4+8),B4
|| MVKL .S2 sys_call_table,B1
LDW .D2T1 *+SP(REGS_A6+8),A6
|| MVKH .S2 sys_call_table,B1
LDW .D2T2 *+B1[B0],B0
|| MVKL .S2 ret_from_syscall_function,B3
LDW .D2T2 *+SP(REGS_B6+8),B6
|| MVKH .S2 ret_from_syscall_function,B3
LDW .D2T1 *+SP(REGS_A8+8),A8
LDW .D2T2 *+SP(REGS_B8+8),B8
NOP
; B0 = sys_call_table[__NR_*]
BNOP .S2 B0,5 ; branch to syscall handler
|| LDW .D2T1 *+SP(REGS_ORIG_A4+8),A4
syscall_exit_work:
AND .D1 _TIF_SYSCALL_TRACE,A2,A0
[!A0] BNOP .S1 work_pending,5
[A0] B .S2 syscall_trace_exit
ADDKPC .S2 resume_userspace,B3,1
MVC .S2 CSR,B1
SET .S2 B1,0,0,B1
MVC .S2 B1,CSR ; enable ints
work_pending:
AND .D1 _TIF_NEED_RESCHED,A2,A0
[!A0] BNOP .S1 work_notifysig,5
work_resched:
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 schedule,A1
MVKH .S1 schedule,A1
B .S2X A1
#else
B .S2 schedule
#endif
ADDKPC .S2 work_rescheduled,B3,4
work_rescheduled:
;; make sure we don't miss an interrupt setting need_resched or
;; sigpending between sampling and the rti
MASK_INT B2
GET_THREAD_INFO A12
LDW .D1T1 *+A12(THREAD_INFO_FLAGS),A2
MVK .S1 _TIF_WORK_MASK,A1
MVK .S1 _TIF_NEED_RESCHED,A3
NOP 2
AND .D1 A1,A2,A0
|| AND .S1 A3,A2,A1
[!A0] BNOP .S1 restore_all,5
[A1] BNOP .S1 work_resched,5
work_notifysig:
;; enable interrupts for do_notify_resume()
UNMASK_INT B2
B .S2 do_notify_resume
LDW .D2T1 *+SP(REGS__END+8),A6 ; syscall flag
ADDKPC .S2 resume_userspace,B3,1
ADD .S1X 8,SP,A4 ; pt_regs pointer is first arg
MV .D2X A2,B4 ; thread_info flags is second arg
;;
;; On C64x+, the return path from exceptions and interrupts
;; is slightly different
;;
ENTRY(ret_from_exception)
#ifdef CONFIG_PREEMPTION
MASK_INT B2
#endif
ENTRY(ret_from_interrupt)
;;
;; Check if we are coming from user mode.
;;
LDW .D2T2 *+SP(REGS_TSR+8),B0
MVK .S2 0x40,B1
NOP 3
AND .D2 B0,B1,B0
[!B0] BNOP .S2 resume_kernel,5
resume_userspace:
;; make sure we don't miss an interrupt setting need_resched or
;; sigpending between sampling and the rti
MASK_INT B2
GET_THREAD_INFO A12
LDW .D1T1 *+A12(THREAD_INFO_FLAGS),A2
MVK .S1 _TIF_WORK_MASK,A1
MVK .S1 _TIF_NEED_RESCHED,A3
NOP 2
AND .D1 A1,A2,A0
[A0] BNOP .S1 work_pending,5
BNOP .S1 restore_all,5
;;
;; System call handling
;; B0 = syscall number (in sys_call_table)
;; A4,B4,A6,B6,A8,B8 = arguments of the syscall function
;; A4 is the return value register
;;
system_call_saved:
MVK .L2 1,B2
STW .D2T2 B2,*+SP(REGS__END+8) ; set syscall flag
MVC .S2 B2,ECR ; ack the software exception
UNMASK_INT B2 ; re-enable global IT
system_call_saved_noack:
;; Check system call number
MVK .S2 __NR_syscalls,B1
#ifdef CONFIG_C6X_BIG_KERNEL
|| MVKL .S1 sys_ni_syscall,A0
#endif
CMPLTU .L2 B0,B1,B1
#ifdef CONFIG_C6X_BIG_KERNEL
|| MVKH .S1 sys_ni_syscall,A0
#endif
;; Check for ptrace
GET_THREAD_INFO A12
#ifdef CONFIG_C6X_BIG_KERNEL
[!B1] B .S2X A0
#else
[!B1] B .S2 sys_ni_syscall
#endif
[!B1] ADDKPC .S2 ret_from_syscall_function,B3,4
;; Get syscall handler addr from sys_call_table
;; call tracesys_on or call syscall handler
LDW .D1T1 *+A12(THREAD_INFO_FLAGS),A2
|| MVKL .S2 sys_call_table,B1
MVKH .S2 sys_call_table,B1
LDW .D2T2 *+B1[B0],B0
NOP 2
; A2 = thread_info flags
AND .D1 _TIF_SYSCALL_TRACE,A2,A2
[A2] BNOP .S1 tracesys_on,5
;; B0 = _sys_call_table[__NR_*]
B .S2 B0
ADDKPC .S2 ret_from_syscall_function,B3,4
ret_from_syscall_function:
STW .D2T1 A4,*+SP(REGS_A4+8) ; save return value in A4
; original A4 is in orig_A4
syscall_exit:
;; make sure we don't miss an interrupt setting need_resched or
;; sigpending between sampling and the rti
MASK_INT B2
LDW .D1T1 *+A12(THREAD_INFO_FLAGS),A2
MVK .S1 _TIF_ALLWORK_MASK,A1
NOP 3
AND .D1 A1,A2,A2 ; check for work to do
[A2] BNOP .S1 syscall_exit_work,5
restore_all:
RESTORE_ALL NRP,NTSR
;;
;; After a fork we jump here directly from resume,
;; so that A4 contains the previous task structure.
;;
ENTRY(ret_from_fork)
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 schedule_tail,A0
MVKH .S1 schedule_tail,A0
B .S2X A0
#else
B .S2 schedule_tail
#endif
ADDKPC .S2 ret_from_fork_2,B3,4
ret_from_fork_2:
;; return 0 in A4 for child process
GET_THREAD_INFO A12
BNOP .S2 syscall_exit,3
MVK .L2 0,B0
STW .D2T2 B0,*+SP(REGS_A4+8)
ENDPROC(ret_from_fork)
ENTRY(ret_from_kernel_thread)
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 schedule_tail,A0
MVKH .S1 schedule_tail,A0
B .S2X A0
#else
B .S2 schedule_tail
#endif
LDW .D2T2 *+SP(REGS_A0+8),B10 /* get fn */
ADDKPC .S2 0f,B3,3
0:
B .S2 B10 /* call fn */
LDW .D2T1 *+SP(REGS_A1+8),A4 /* get arg */
ADDKPC .S2 ret_from_fork_2,B3,3
ENDPROC(ret_from_kernel_thread)
;;
;; These are the interrupt handlers, responsible for calling c6x_do_IRQ()
;;
.macro SAVE_ALL_INT
SAVE_ALL IRP,ITSR
.endm
.macro CALL_INT int
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 c6x_do_IRQ,A0
MVKH .S1 c6x_do_IRQ,A0
BNOP .S2X A0,1
MVK .S1 int,A4
ADDAW .D2 SP,2,B4
MVKL .S2 ret_from_interrupt,B3
MVKH .S2 ret_from_interrupt,B3
#else
CALLP .S2 c6x_do_IRQ,B3
|| MVK .S1 int,A4
|| ADDAW .D2 SP,2,B4
B .S1 ret_from_interrupt
NOP 5
#endif
.endm
ENTRY(_int4_handler)
SAVE_ALL_INT
CALL_INT 4
ENDPROC(_int4_handler)
ENTRY(_int5_handler)
SAVE_ALL_INT
CALL_INT 5
ENDPROC(_int5_handler)
ENTRY(_int6_handler)
SAVE_ALL_INT
CALL_INT 6
ENDPROC(_int6_handler)
ENTRY(_int7_handler)
SAVE_ALL_INT
CALL_INT 7
ENDPROC(_int7_handler)
ENTRY(_int8_handler)
SAVE_ALL_INT
CALL_INT 8
ENDPROC(_int8_handler)
ENTRY(_int9_handler)
SAVE_ALL_INT
CALL_INT 9
ENDPROC(_int9_handler)
ENTRY(_int10_handler)
SAVE_ALL_INT
CALL_INT 10
ENDPROC(_int10_handler)
ENTRY(_int11_handler)
SAVE_ALL_INT
CALL_INT 11
ENDPROC(_int11_handler)
ENTRY(_int12_handler)
SAVE_ALL_INT
CALL_INT 12
ENDPROC(_int12_handler)
ENTRY(_int13_handler)
SAVE_ALL_INT
CALL_INT 13
ENDPROC(_int13_handler)
ENTRY(_int14_handler)
SAVE_ALL_INT
CALL_INT 14
ENDPROC(_int14_handler)
ENTRY(_int15_handler)
SAVE_ALL_INT
CALL_INT 15
ENDPROC(_int15_handler)
;;
;; Handler for uninitialized and spurious interrupts
;;
ENTRY(_bad_interrupt)
B .S2 IRP
NOP 5
ENDPROC(_bad_interrupt)
;;
;; Entry for NMI/exceptions/syscall
;;
ENTRY(_nmi_handler)
SAVE_ALL NRP,NTSR
MVC .S2 EFR,B2
CMPEQ .L2 1,B2,B2
|| MVC .S2 TSR,B1
CLR .S2 B1,10,10,B1
MVC .S2 B1,TSR
#ifdef CONFIG_C6X_BIG_KERNEL
[!B2] MVKL .S1 process_exception,A0
[!B2] MVKH .S1 process_exception,A0
[!B2] B .S2X A0
#else
[!B2] B .S2 process_exception
#endif
[B2] B .S2 system_call_saved
[!B2] ADDAW .D2 SP,2,B1
[!B2] MV .D1X B1,A4
ADDKPC .S2 ret_from_trap,B3,2
ret_from_trap:
MV .D2X A4,B0
[!B0] BNOP .S2 ret_from_exception,5
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S2 system_call_saved_noack,B3
MVKH .S2 system_call_saved_noack,B3
#endif
LDW .D2T2 *+SP(REGS_B0+8),B0
LDW .D2T1 *+SP(REGS_A4+8),A4
LDW .D2T2 *+SP(REGS_B4+8),B4
LDW .D2T1 *+SP(REGS_A6+8),A6
LDW .D2T2 *+SP(REGS_B6+8),B6
LDW .D2T1 *+SP(REGS_A8+8),A8
#ifdef CONFIG_C6X_BIG_KERNEL
|| B .S2 B3
#else
|| B .S2 system_call_saved_noack
#endif
LDW .D2T2 *+SP(REGS_B8+8),B8
NOP 4
ENDPROC(_nmi_handler)
;;
;; Jump to schedule() then return to ret_from_isr
;;
#ifdef CONFIG_PREEMPTION
resume_kernel:
GET_THREAD_INFO A12
LDW .D1T1 *+A12(THREAD_INFO_PREEMPT_COUNT),A1
NOP 4
[A1] BNOP .S2 restore_all,5
preempt_schedule:
GET_THREAD_INFO A2
LDW .D1T1 *+A2(THREAD_INFO_FLAGS),A1
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S2 preempt_schedule_irq,B0
MVKH .S2 preempt_schedule_irq,B0
NOP 2
#else
NOP 4
#endif
AND .D1 _TIF_NEED_RESCHED,A1,A1
[!A1] BNOP .S2 restore_all,5
#ifdef CONFIG_C6X_BIG_KERNEL
B .S2 B0
#else
B .S2 preempt_schedule_irq
#endif
ADDKPC .S2 preempt_schedule,B3,4
#endif /* CONFIG_PREEMPTION */
ENTRY(enable_exception)
DINT
MVC .S2 TSR,B0
MVC .S2 B3,NRP
MVK .L2 0xc,B1
OR .D2 B0,B1,B0
MVC .S2 B0,TSR ; Set GEE and XEN in TSR
B .S2 NRP
NOP 5
ENDPROC(enable_exception)
;;
;; Special system calls
;; return address is in B3
;;
ENTRY(sys_rt_sigreturn)
ADD .D1X SP,8,A4
#ifdef CONFIG_C6X_BIG_KERNEL
|| MVKL .S1 do_rt_sigreturn,A0
MVKH .S1 do_rt_sigreturn,A0
BNOP .S2X A0,5
#else
|| B .S2 do_rt_sigreturn
NOP 5
#endif
ENDPROC(sys_rt_sigreturn)
ENTRY(sys_pread_c6x)
MV .D2X A8,B7
#ifdef CONFIG_C6X_BIG_KERNEL
|| MVKL .S1 sys_pread64,A0
MVKH .S1 sys_pread64,A0
BNOP .S2X A0,5
#else
|| B .S2 sys_pread64
NOP 5
#endif
ENDPROC(sys_pread_c6x)
ENTRY(sys_pwrite_c6x)
MV .D2X A8,B7
#ifdef CONFIG_C6X_BIG_KERNEL
|| MVKL .S1 sys_pwrite64,A0
MVKH .S1 sys_pwrite64,A0
BNOP .S2X A0,5
#else
|| B .S2 sys_pwrite64
NOP 5
#endif
ENDPROC(sys_pwrite_c6x)
;; On Entry
;; A4 - path
;; B4 - offset_lo (LE), offset_hi (BE)
;; A6 - offset_lo (BE), offset_hi (LE)
ENTRY(sys_truncate64_c6x)
#ifdef CONFIG_CPU_BIG_ENDIAN
MV .S2 B4,B5
MV .D2X A6,B4
#else
MV .D2X A6,B5
#endif
#ifdef CONFIG_C6X_BIG_KERNEL
|| MVKL .S1 sys_truncate64,A0
MVKH .S1 sys_truncate64,A0
BNOP .S2X A0,5
#else
|| B .S2 sys_truncate64
NOP 5
#endif
ENDPROC(sys_truncate64_c6x)
;; On Entry
;; A4 - fd
;; B4 - offset_lo (LE), offset_hi (BE)
;; A6 - offset_lo (BE), offset_hi (LE)
ENTRY(sys_ftruncate64_c6x)
#ifdef CONFIG_CPU_BIG_ENDIAN
MV .S2 B4,B5
MV .D2X A6,B4
#else
MV .D2X A6,B5
#endif
#ifdef CONFIG_C6X_BIG_KERNEL
|| MVKL .S1 sys_ftruncate64,A0
MVKH .S1 sys_ftruncate64,A0
BNOP .S2X A0,5
#else
|| B .S2 sys_ftruncate64
NOP 5
#endif
ENDPROC(sys_ftruncate64_c6x)
;; On Entry
;; A4 - fd
;; B4 - offset_lo (LE), offset_hi (BE)
;; A6 - offset_lo (BE), offset_hi (LE)
;; B6 - len_lo (LE), len_hi (BE)
;; A8 - len_lo (BE), len_hi (LE)
;; B8 - advice
ENTRY(sys_fadvise64_64_c6x)
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 sys_fadvise64_64,A0
MVKH .S1 sys_fadvise64_64,A0
BNOP .S2X A0,2
#else
B .S2 sys_fadvise64_64
NOP 2
#endif
#ifdef CONFIG_CPU_BIG_ENDIAN
MV .L2 B4,B5
|| MV .D2X A6,B4
MV .L1 A8,A6
|| MV .D1X B6,A7
#else
MV .D2X A6,B5
MV .L1 A8,A7
|| MV .D1X B6,A6
#endif
MV .L2 B8,B6
ENDPROC(sys_fadvise64_64_c6x)
;; On Entry
;; A4 - fd
;; B4 - mode
;; A6 - offset_hi
;; B6 - offset_lo
;; A8 - len_hi
;; B8 - len_lo
ENTRY(sys_fallocate_c6x)
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 sys_fallocate,A0
MVKH .S1 sys_fallocate,A0
BNOP .S2X A0,1
#else
B .S2 sys_fallocate
NOP
#endif
MV .D1 A6,A7
MV .D1X B6,A6
MV .D2X A8,B7
MV .D2 B8,B6
ENDPROC(sys_fallocate_c6x)
;; put this in .neardata for faster access when using DSBT mode
.section .neardata,"aw",@progbits
.global current_ksp
.hidden current_ksp
current_ksp:
.word init_thread_union + THREAD_START_SP
; SPDX-License-Identifier: GPL-2.0-only
;
; Port on Texas Instruments TMS320C6x architecture
;
; Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
; Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
;
#include <linux/linkage.h>
#include <linux/of_fdt.h>
#include <asm/asm-offsets.h>
__HEAD
ENTRY(_c_int00)
;; Save magic and pointer
MV .S1 A4,A10
MV .S2 B4,B10
MVKL .S2 __bss_start,B5
MVKH .S2 __bss_start,B5
MVKL .S2 __bss_stop,B6
MVKH .S2 __bss_stop,B6
SUB .L2 B6,B5,B6 ; bss size
;; Set the stack pointer
MVKL .S2 current_ksp,B0
MVKH .S2 current_ksp,B0
LDW .D2T2 *B0,B15
;; clear bss
SHR .S2 B6,3,B0 ; number of dwords to clear
ZERO .L2 B13
ZERO .L2 B12
bss_loop:
BDEC .S2 bss_loop,B0
NOP 3
CMPLT .L2 B0,0,B1
[!B1] STDW .D2T2 B13:B12,*B5++[1]
NOP 4
AND .D2 ~7,B15,B15
;; Clear GIE and PGIE
MVC .S2 CSR,B2
CLR .S2 B2,0,1,B2
MVC .S2 B2,CSR
MVC .S2 TSR,B2
CLR .S2 B2,0,1,B2
MVC .S2 B2,TSR
MVC .S2 ITSR,B2
CLR .S2 B2,0,1,B2
MVC .S2 B2,ITSR
MVC .S2 NTSR,B2
CLR .S2 B2,0,1,B2
MVC .S2 B2,NTSR
;; pass DTB pointer to machine_init (or zero if none)
MVKL .S1 OF_DT_HEADER,A0
MVKH .S1 OF_DT_HEADER,A0
CMPEQ .L1 A10,A0,A0
[A0] MV .S1X B10,A4
[!A0] MVK .S1 0,A4
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 machine_init,A0
MVKH .S1 machine_init,A0
B .S2X A0
ADDKPC .S2 0f,B3,4
0:
#else
CALLP .S2 machine_init,B3
#endif
;; Jump to Linux init
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 start_kernel,A0
MVKH .S1 start_kernel,A0
B .S2X A0
#else
B .S2 start_kernel
#endif
NOP 5
L1: BNOP .S2 L1,5
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) 2011-2012 Texas Instruments Incorporated
*
* This borrows heavily from powerpc version, which is:
*
* Derived from arch/i386/kernel/irq.c
* Copyright (C) 1992 Linus Torvalds
* Adapted from arch/i386 by Gary Thomas
* Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
* Updated and modified by Cort Dougan <cort@fsmlabs.com>
* Copyright (C) 1996-2001 Cort Dougan
* Adapted for Power Macintosh by Paul Mackerras
* Copyright (C) 1996 Paul Mackerras (paulus@cs.anu.edu.au)
*/
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/radix-tree.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/interrupt.h>
#include <linux/kernel_stat.h>
#include <asm/megamod-pic.h>
#include <asm/special_insns.h>
unsigned long irq_err_count;
static DEFINE_RAW_SPINLOCK(core_irq_lock);
static void mask_core_irq(struct irq_data *data)
{
unsigned int prio = data->hwirq;
raw_spin_lock(&core_irq_lock);
and_creg(IER, ~(1 << prio));
raw_spin_unlock(&core_irq_lock);
}
static void unmask_core_irq(struct irq_data *data)
{
unsigned int prio = data->hwirq;
raw_spin_lock(&core_irq_lock);
or_creg(IER, 1 << prio);
raw_spin_unlock(&core_irq_lock);
}
static struct irq_chip core_chip = {
.name = "core",
.irq_mask = mask_core_irq,
.irq_unmask = unmask_core_irq,
};
static int prio_to_virq[NR_PRIORITY_IRQS];
asmlinkage void c6x_do_IRQ(unsigned int prio, struct pt_regs *regs)
{
struct pt_regs *old_regs = set_irq_regs(regs);
irq_enter();
generic_handle_irq(prio_to_virq[prio]);
irq_exit();
set_irq_regs(old_regs);
}
static struct irq_domain *core_domain;
static int core_domain_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
if (hw < 4 || hw >= NR_PRIORITY_IRQS)
return -EINVAL;
prio_to_virq[hw] = virq;
irq_set_status_flags(virq, IRQ_LEVEL);
irq_set_chip_and_handler(virq, &core_chip, handle_level_irq);
return 0;
}
static const struct irq_domain_ops core_domain_ops = {
.map = core_domain_map,
.xlate = irq_domain_xlate_onecell,
};
void __init init_IRQ(void)
{
struct device_node *np;
/* Mask all priority IRQs */
and_creg(IER, ~0xfff0);
np = of_find_compatible_node(NULL, NULL, "ti,c64x+core-pic");
if (np != NULL) {
/* create the core host */
core_domain = irq_domain_add_linear(np, NR_PRIORITY_IRQS,
&core_domain_ops, NULL);
if (core_domain)
irq_set_default_host(core_domain);
of_node_put(np);
}
printk(KERN_INFO "Core interrupt controller initialized\n");
/* now we're ready for other SoC controllers */
megamod_pic_init();
/* Clear all general IRQ flags */
set_creg(ICR, 0xfff0);
}
void ack_bad_irq(int irq)
{
printk(KERN_ERR "IRQ: spurious interrupt %d\n", irq);
irq_err_count++;
}
int arch_show_interrupts(struct seq_file *p, int prec)
{
seq_printf(p, "%*s: %10lu\n", prec, "Err", irq_err_count);
return 0;
}
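/*
 * A brief sketch of the dispatch path above (all names are from this
 * file): SoC events are cascaded by the megamod PIC onto the core
 * priority lines 4..15 (core_domain_map() rejects hw < 4).  init_IRQ()
 * first masks IER bits 4..15 via ~0xfff0; individual lines are later
 * unmasked by unmask_core_irq() when a handler is installed.  The
 * low-level interrupt entry then calls c6x_do_IRQ(prio, regs), which
 * translates the hardware priority to a Linux virq through
 * prio_to_virq[] and dispatches it with generic_handle_irq().
 */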
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2005, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Thomas Charleux (thomas.charleux@jaluna.com)
*/
#include <linux/moduleloader.h>
#include <linux/elf.h>
#include <linux/vmalloc.h>
#include <linux/kernel.h>
static inline int fixup_pcr(u32 *ip, Elf32_Addr dest, u32 maskbits, int shift)
{
u32 opcode;
long ep = (long)ip & ~31;
long delta = ((long)dest - ep) >> 2;
long mask = (1 << maskbits) - 1;
if ((delta >> (maskbits - 1)) == 0 ||
(delta >> (maskbits - 1)) == -1) {
opcode = *ip;
opcode &= ~(mask << shift);
opcode |= ((delta & mask) << shift);
*ip = opcode;
pr_debug("REL PCR_S%d[%p] dest[%p] opcode[%08x]\n",
maskbits, ip, (void *)dest, opcode);
return 0;
}
pr_err("PCR_S%d reloc %p -> %p out of range!\n",
maskbits, ip, (void *)dest);
return -1;
}
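/*
 * Worked example of the range check in fixup_pcr() (numbers for
 * illustration): a PCR_S21 relocation has a 21-bit displacement field,
 * and 'delta' is measured in 32-bit words from the 32-byte aligned
 * fetch packet containing the instruction.  The check accepts exactly
 * the deltas whose sign extension from 21 bits is lossless, i.e.
 * delta in [-2^20, 2^20 - 1], roughly +/- 4 MiB of reach.
 */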
/*
* apply a RELA relocation
*/
int apply_relocate_add(Elf32_Shdr *sechdrs,
const char *strtab,
unsigned int symindex,
unsigned int relsec,
struct module *me)
{
Elf32_Rela *rel = (void *) sechdrs[relsec].sh_addr;
Elf_Sym *sym;
u32 *location, opcode;
unsigned int i;
Elf32_Addr v;
Elf_Addr offset = 0;
pr_debug("Applying relocate section %u to %u with offset 0x%x\n",
relsec, sechdrs[relsec].sh_info, offset);
for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
/* This is where to make the change */
location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+ rel[i].r_offset - offset;
/* This is the symbol it is referring to. Note that all
undefined symbols have been resolved. */
sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+ ELF32_R_SYM(rel[i].r_info);
/* this is the adjustment to be made */
v = sym->st_value + rel[i].r_addend;
switch (ELF32_R_TYPE(rel[i].r_info)) {
case R_C6000_ABS32:
pr_debug("RELA ABS32: [%p] = 0x%x\n", location, v);
*location = v;
break;
case R_C6000_ABS16:
pr_debug("RELA ABS16: [%p] = 0x%x\n", location, v);
*(u16 *)location = v;
break;
case R_C6000_ABS8:
pr_debug("RELA ABS8: [%p] = 0x%x\n", location, v);
*(u8 *)location = v;
break;
case R_C6000_ABS_L16:
opcode = *location;
opcode &= ~0x7fff80;
opcode |= ((v & 0xffff) << 7);
pr_debug("RELA ABS_L16[%p] v[0x%x] opcode[0x%x]\n",
location, v, opcode);
*location = opcode;
break;
case R_C6000_ABS_H16:
opcode = *location;
opcode &= ~0x7fff80;
opcode |= ((v >> 9) & 0x7fff80);
pr_debug("RELA ABS_H16[%p] v[0x%x] opcode[0x%x]\n",
location, v, opcode);
*location = opcode;
break;
case R_C6000_PCR_S21:
if (fixup_pcr(location, v, 21, 7))
return -ENOEXEC;
break;
case R_C6000_PCR_S12:
if (fixup_pcr(location, v, 12, 16))
return -ENOEXEC;
break;
case R_C6000_PCR_S10:
if (fixup_pcr(location, v, 10, 13))
return -ENOEXEC;
break;
default:
pr_err("module %s: Unknown RELA relocation: %u\n",
me->name, ELF32_R_TYPE(rel[i].r_info));
return -ENOEXEC;
}
}
return 0;
}
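/*
 * For illustration of the ABS_L16/ABS_H16 cases above, take a
 * hypothetical symbol value v = 0x12345678: ABS_L16 writes the low
 * half into the 16-bit constant field at opcode bits 7..22
 * ((v & 0xffff) << 7 == 0x5678 << 7), and ABS_H16 writes the high half
 * into the same field ((v >> 9) & 0x7fff80 == 0x1234 << 7), so an
 * MVKL/MVKH pair ends up materializing the full 32-bit value.
 */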
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2006, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#include <linux/module.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>
#include <linux/init_task.h>
#include <linux/tick.h>
#include <linux/mqueue.h>
#include <linux/syscalls.h>
#include <linux/reboot.h>
#include <linux/sched/task.h>
#include <linux/sched/task_stack.h>
#include <asm/syscalls.h>
/* hooks for board specific support */
void (*c6x_restart)(void);
void (*c6x_halt)(void);
extern asmlinkage void ret_from_fork(void);
extern asmlinkage void ret_from_kernel_thread(void);
/*
* power off function, if any
*/
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
void arch_cpu_idle(void)
{
unsigned long tmp;
/*
* Put local_irq_enable and idle in the same execute packet
* so they take effect atomically and there is no window for
* an interrupt to be taken just before the core goes idle.
*/
asm volatile (" mvc .s2 CSR,%0\n"
" or .d2 1,%0,%0\n"
" mvc .s2 %0,CSR\n"
"|| idle\n"
: "=b"(tmp));
}
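/*
 * For contrast, a minimal sketch of the racy variant the comment above
 * guards against: enabling interrupts and idling in two separate
 * execute packets leaves a window where the wake-up interrupt can be
 * serviced before the IDLE, and the core then sleeps until some later,
 * unrelated interrupt arrives:
 *
 *	local_irq_enable();		// GIE set here ...
 *					// <-- wake-up IRQ taken here
 *	asm volatile ("idle");		// ... idles having missed it
 *
 * Issuing the CSR write and IDLE in one execute packet ("||") closes
 * that window.
 */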
static void halt_loop(void)
{
printk(KERN_EMERG "System Halted, OK to turn off power\n");
local_irq_disable();
while (1)
asm volatile("idle\n");
}
void machine_restart(char *__unused)
{
if (c6x_restart)
c6x_restart();
halt_loop();
}
void machine_halt(void)
{
if (c6x_halt)
c6x_halt();
halt_loop();
}
void machine_power_off(void)
{
if (pm_power_off)
pm_power_off();
halt_loop();
}
void flush_thread(void)
{
}
/*
* Do necessary setup to start up a newly executed thread.
*/
void start_thread(struct pt_regs *regs, unsigned int pc, unsigned long usp)
{
/*
* The binfmt loader will set up a "full" stack, but the C6X
* uses an "empty" stack convention, so we adjust the usp so that
* argc doesn't get destroyed if an interrupt is taken before
* it is read from the stack.
*
* NB: Library startup code needs to match this.
*/
usp -= 8;
regs->pc = pc;
regs->sp = usp;
regs->tsr |= 0x40; /* set user mode */
current->thread.usp = usp;
}
/*
* Copy a new thread context in its stack.
*/
int copy_thread(unsigned long clone_flags, unsigned long usp,
unsigned long ustk_size, struct task_struct *p,
unsigned long tls)
{
struct pt_regs *childregs;
childregs = task_pt_regs(p);
if (unlikely(p->flags & PF_KTHREAD)) {
/* case of __kernel_thread: we return to supervisor space */
memset(childregs, 0, sizeof(struct pt_regs));
childregs->sp = (unsigned long)(childregs + 1);
p->thread.pc = (unsigned long) ret_from_kernel_thread;
childregs->a0 = usp; /* function */
childregs->a1 = ustk_size; /* argument */
} else {
/* Otherwise use the given stack */
*childregs = *current_pt_regs();
if (usp)
childregs->sp = usp;
p->thread.pc = (unsigned long) ret_from_fork;
}
/* Set usp/ksp */
p->thread.usp = childregs->sp;
thread_saved_ksp(p) = (unsigned long)childregs - 8;
p->thread.wchan = p->thread.pc;
#ifdef __DSBT__
{
unsigned long dp;
asm volatile ("mv .S2 b14,%0\n" : "=b"(dp));
thread_saved_dp(p) = dp;
if (usp == -1)
childregs->dp = dp;
}
#endif
return 0;
}
unsigned long get_wchan(struct task_struct *p)
{
return p->thread.wchan;
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2006, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Updated for 2.6.34: Mark Salter <msalter@redhat.com>
*/
#include <linux/ptrace.h>
#include <linux/tracehook.h>
#include <linux/regset.h>
#include <linux/elf.h>
#include <linux/sched/task_stack.h>
#include <asm/cacheflush.h>
#define PT_REG_SIZE (sizeof(struct pt_regs))
/*
* Called by kernel/ptrace.c when detaching.
*/
void ptrace_disable(struct task_struct *child)
{
/* nothing to do */
}
/*
* Get a register number from live pt_regs for the specified task.
*/
static inline long get_reg(struct task_struct *task, int regno)
{
long *addr = (long *)task_pt_regs(task);
if (regno == PT_TSR || regno == PT_CSR)
return 0;
return addr[regno];
}
/*
* Write contents of register REGNO in task TASK.
*/
static inline int put_reg(struct task_struct *task,
int regno,
unsigned long data)
{
unsigned long *addr = (unsigned long *)task_pt_regs(task);
if (regno != PT_TSR && regno != PT_CSR)
addr[regno] = data;
return 0;
}
/* regset get/set implementations */
static int gpr_get(struct task_struct *target,
const struct user_regset *regset,
struct membuf to)
{
return membuf_write(&to, task_pt_regs(target), sizeof(struct pt_regs));
}
enum c6x_regset {
REGSET_GPR,
};
static const struct user_regset c6x_regsets[] = {
[REGSET_GPR] = {
.core_note_type = NT_PRSTATUS,
.n = ELF_NGREG,
.size = sizeof(u32),
.align = sizeof(u32),
.regset_get = gpr_get,
},
};
static const struct user_regset_view user_c6x_native_view = {
.name = "tic6x",
.e_machine = EM_TI_C6000,
.regsets = c6x_regsets,
.n = ARRAY_SIZE(c6x_regsets),
};
const struct user_regset_view *task_user_regset_view(struct task_struct *task)
{
return &user_c6x_native_view;
}
/*
* Perform ptrace request
*/
long arch_ptrace(struct task_struct *child, long request,
unsigned long addr, unsigned long data)
{
int ret = 0;
switch (request) {
/*
* write the word at location addr.
*/
case PTRACE_POKETEXT:
ret = generic_ptrace_pokedata(child, addr, data);
if (ret == 0 && request == PTRACE_POKETEXT)
flush_icache_range(addr, addr + 4);
break;
default:
ret = ptrace_request(child, request, addr, data);
break;
}
return ret;
}
/*
* handle tracing of system call entry
* - return the revised system call number or ULONG_MAX to cause ENOSYS
*/
asmlinkage unsigned long syscall_trace_entry(struct pt_regs *regs)
{
if (tracehook_report_syscall_entry(regs))
/* tracing decided this syscall should not happen, so
* we'll return a bogus call number to get an ENOSYS
* error, but leave the original number in
* regs->orig_a4
*/
return ULONG_MAX;
return regs->b0;
}
/*
* handle tracing of system call exit
*/
asmlinkage void syscall_trace_exit(struct pt_regs *regs)
{
tracehook_report_syscall_exit(regs, 0);
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2006, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#include <linux/dma-mapping.h>
#include <linux/memblock.h>
#include <linux/seq_file.h>
#include <linux/clkdev.h>
#include <linux/initrd.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_fdt.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/cache.h>
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/fs.h>
#include <linux/of.h>
#include <linux/console.h>
#include <linux/screen_info.h>
#include <asm/sections.h>
#include <asm/div64.h>
#include <asm/setup.h>
#include <asm/dscr.h>
#include <asm/clock.h>
#include <asm/soc.h>
#include <asm/special_insns.h>
static const char *c6x_soc_name;
struct screen_info screen_info;
int c6x_num_cores;
EXPORT_SYMBOL_GPL(c6x_num_cores);
unsigned int c6x_silicon_rev;
EXPORT_SYMBOL_GPL(c6x_silicon_rev);
/*
* Device status register. This holds information
* about device configuration needed by some drivers.
*/
unsigned int c6x_devstat;
EXPORT_SYMBOL_GPL(c6x_devstat);
/*
* Some SoCs have fuse registers holding a unique MAC
* address. This is parsed out of the device tree with
* the resulting MAC being held here.
*/
unsigned char c6x_fuse_mac[6];
unsigned long memory_start;
unsigned long memory_end;
EXPORT_SYMBOL(memory_end);
unsigned long ram_start;
unsigned long ram_end;
/* Uncached memory for DMA consistent use (memdma=) */
static unsigned long dma_start __initdata;
static unsigned long dma_size __initdata;
struct cpuinfo_c6x {
const char *cpu_name;
const char *cpu_voltage;
const char *mmu;
const char *fpu;
char *cpu_rev;
unsigned int core_id;
char __cpu_rev[5];
};
static DEFINE_PER_CPU(struct cpuinfo_c6x, cpu_data);
unsigned int ticks_per_ns_scaled;
EXPORT_SYMBOL(ticks_per_ns_scaled);
unsigned int c6x_core_freq;
static void __init get_cpuinfo(void)
{
unsigned cpu_id, rev_id, csr;
struct clk *coreclk = clk_get_sys(NULL, "core");
unsigned long core_khz;
u64 tmp;
struct cpuinfo_c6x *p;
struct device_node *node;
p = &per_cpu(cpu_data, smp_processor_id());
if (!IS_ERR(coreclk))
c6x_core_freq = clk_get_rate(coreclk);
else {
printk(KERN_WARNING
"Cannot find core clock frequency. Using 700MHz\n");
c6x_core_freq = 700000000;
}
core_khz = c6x_core_freq / 1000;
tmp = (uint64_t)core_khz << C6X_NDELAY_SCALE;
do_div(tmp, 1000000);
ticks_per_ns_scaled = tmp;
csr = get_creg(CSR);
cpu_id = csr >> 24;
rev_id = (csr >> 16) & 0xff;
p->mmu = "none";
p->fpu = "none";
p->cpu_voltage = "unknown";
switch (cpu_id) {
case 0:
p->cpu_name = "C67x";
p->fpu = "yes";
break;
case 2:
p->cpu_name = "C62x";
break;
case 8:
p->cpu_name = "C64x";
break;
case 12:
p->cpu_name = "C64x";
break;
case 16:
p->cpu_name = "C64x+";
p->cpu_voltage = "1.2";
break;
case 21:
p->cpu_name = "C66X";
p->cpu_voltage = "1.2";
break;
default:
p->cpu_name = "unknown";
break;
}
if (cpu_id < 16) {
switch (rev_id) {
case 0x1:
if (cpu_id > 8) {
p->cpu_rev = "DM640/DM641/DM642/DM643";
p->cpu_voltage = "1.2 - 1.4";
} else {
p->cpu_rev = "C6201";
p->cpu_voltage = "2.5";
}
break;
case 0x2:
p->cpu_rev = "C6201B/C6202/C6211";
p->cpu_voltage = "1.8";
break;
case 0x3:
p->cpu_rev = "C6202B/C6203/C6204/C6205";
p->cpu_voltage = "1.5";
break;
case 0x201:
p->cpu_rev = "C6701 revision 0 (early CPU)";
p->cpu_voltage = "1.8";
break;
case 0x202:
p->cpu_rev = "C6701/C6711/C6712";
p->cpu_voltage = "1.8";
break;
case 0x801:
p->cpu_rev = "C64x";
p->cpu_voltage = "1.5";
break;
default:
p->cpu_rev = "unknown";
}
} else {
p->cpu_rev = p->__cpu_rev;
snprintf(p->__cpu_rev, sizeof(p->__cpu_rev), "0x%x", cpu_id);
}
p->core_id = get_coreid();
for_each_of_cpu_node(node)
++c6x_num_cores;
node = of_find_node_by_name(NULL, "soc");
if (node) {
if (of_property_read_string(node, "model", &c6x_soc_name))
c6x_soc_name = "unknown";
of_node_put(node);
} else
c6x_soc_name = "unknown";
printk(KERN_INFO "CPU%d: %s rev %s, %s volts, %uMHz\n",
p->core_id, p->cpu_name, p->cpu_rev,
p->cpu_voltage, c6x_core_freq / 1000000);
}
/*
* Early parsing of the command line
*/
static u32 mem_size __initdata;
/* "mem=" parsing. */
static int __init early_mem(char *p)
{
if (!p)
return -EINVAL;
mem_size = memparse(p, &p);
/* don't remove all of memory when handling "mem={invalid}" */
if (mem_size == 0)
return -EINVAL;
return 0;
}
early_param("mem", early_mem);
/* "memdma=<size>[@<address>]" parsing. */
static int __init early_memdma(char *p)
{
if (!p)
return -EINVAL;
dma_size = memparse(p, &p);
if (*p == '@')
dma_start = memparse(p, &p);
return 0;
}
early_param("memdma", early_memdma);
int __init c6x_add_memory(phys_addr_t start, unsigned long size)
{
static int ram_found __initdata;
/* We only handle one bank (the one with PAGE_OFFSET) for now */
if (ram_found)
return -EINVAL;
if (start > PAGE_OFFSET || PAGE_OFFSET >= (start + size))
return 0;
ram_start = start;
ram_end = start + size;
ram_found = 1;
return 0;
}
/*
* Do early machine setup and device tree parsing. This is called very
* early in the boot process.
*/
notrace void __init machine_init(unsigned long dt_ptr)
{
void *dtb = __va(dt_ptr);
void *fdt = __dtb_start;
/* interrupts must be masked */
set_creg(IER, 2);
/*
* Set the Interrupt Service Table (IST) to the beginning of the
* vector table.
*/
set_ist(_vectors_start);
/*
* dtb is passed in from bootloader.
* fdt is linked in blob.
*/
if (dtb && dtb != fdt)
fdt = dtb;
/* Do some early initialization based on the flat device tree */
early_init_dt_scan(fdt);
parse_early_param();
}
void __init setup_arch(char **cmdline_p)
{
phys_addr_t start, end;
u64 i;
printk(KERN_INFO "Initializing kernel\n");
/* Initialize command line */
*cmdline_p = boot_command_line;
memory_end = ram_end;
memory_end &= ~(PAGE_SIZE - 1);
if (mem_size && (PAGE_OFFSET + PAGE_ALIGN(mem_size)) < memory_end)
memory_end = PAGE_OFFSET + PAGE_ALIGN(mem_size);
/* add block that this kernel can use */
memblock_add(PAGE_OFFSET, memory_end - PAGE_OFFSET);
/* reserve kernel text/data/bss */
memblock_reserve(PAGE_OFFSET,
PAGE_ALIGN((unsigned long)&_end - PAGE_OFFSET));
if (dma_size) {
/* align to cacheability granularity */
dma_size = CACHE_REGION_END(dma_size);
if (!dma_start)
dma_start = memory_end - dma_size;
/* align to cacheability granularity */
dma_start = CACHE_REGION_START(dma_start);
/* reserve DMA memory taken from kernel memory */
if (memblock_is_region_memory(dma_start, dma_size))
memblock_reserve(dma_start, dma_size);
}
memory_start = PAGE_ALIGN((unsigned int) &_end);
printk(KERN_INFO "Memory Start=%08lx, Memory End=%08lx\n",
memory_start, memory_end);
#ifdef CONFIG_BLK_DEV_INITRD
/*
* Reserve initrd memory if in kernel memory.
*/
if (initrd_start < initrd_end)
if (memblock_is_region_memory(initrd_start,
initrd_end - initrd_start))
memblock_reserve(initrd_start,
initrd_end - initrd_start);
#endif
init_mm.start_code = (unsigned long) &_stext;
init_mm.end_code = (unsigned long) &_etext;
init_mm.end_data = memory_start;
init_mm.brk = memory_start;
unflatten_and_copy_device_tree();
c6x_cache_init();
/* Set the whole external memory as non-cacheable */
disable_caching(ram_start, ram_end - 1);
/* Set caching of external RAM used by Linux */
for_each_mem_range(i, &start, &end)
enable_caching(CACHE_REGION_START(start),
CACHE_REGION_START(end - 1));
#ifdef CONFIG_BLK_DEV_INITRD
/*
* Enable caching for initrd which falls outside kernel memory.
*/
if (initrd_start < initrd_end) {
if (!memblock_is_region_memory(initrd_start,
initrd_end - initrd_start))
enable_caching(CACHE_REGION_START(initrd_start),
CACHE_REGION_START(initrd_end - 1));
}
#endif
/*
* Disable caching for dma coherent memory taken from kernel memory.
*/
if (dma_size && memblock_is_region_memory(dma_start, dma_size))
disable_caching(dma_start,
CACHE_REGION_START(dma_start + dma_size - 1));
/* Initialize the coherent memory allocator */
coherent_mem_init(dma_start, dma_size);
max_low_pfn = PFN_DOWN(memory_end);
min_low_pfn = PFN_UP(memory_start);
max_pfn = max_low_pfn;
max_mapnr = max_low_pfn - min_low_pfn;
/* Get kmalloc into gear */
paging_init();
/*
* Probe for Device State Configuration Registers.
* We have to do this early in case timer needs to be enabled
* through DSCR.
*/
dscr_probe();
/* We do this early for timer and core clock frequency */
c64x_setup_clocks();
/* Get CPU info */
get_cpuinfo();
#if defined(CONFIG_VT) && defined(CONFIG_DUMMY_CONSOLE)
conswitchp = &dummy_con;
#endif
}
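/*
 * A short note on the cacheability setup above (example values are
 * illustrative): all of external RAM is first marked non-cacheable and
 * only the ranges Linux actually uses are re-enabled.  Booting with,
 * say, "memdma=4M" carves a 4 MiB region out of the top of kernel
 * memory (or "memdma=4M@<addr>" places it at a fixed address), leaves
 * it uncached, and hands it to coherent_mem_init() for DMA-coherent
 * allocations.
 */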
#define cpu_to_ptr(n) ((void *)((long)(n)+1))
#define ptr_to_cpu(p) ((long)(p) - 1)
static int show_cpuinfo(struct seq_file *m, void *v)
{
int n = ptr_to_cpu(v);
struct cpuinfo_c6x *p = &per_cpu(cpu_data, n);
if (n == 0) {
seq_printf(m,
"soc\t\t: %s\n"
"soc revision\t: 0x%x\n"
"soc cores\t: %d\n",
c6x_soc_name, c6x_silicon_rev, c6x_num_cores);
}
seq_printf(m,
"\n"
"processor\t: %d\n"
"cpu\t\t: %s\n"
"core revision\t: %s\n"
"core voltage\t: %s\n"
"core id\t\t: %d\n"
"mmu\t\t: %s\n"
"fpu\t\t: %s\n"
"cpu MHz\t\t: %u\n"
"bogomips\t: %lu.%02lu\n\n",
n,
p->cpu_name, p->cpu_rev, p->cpu_voltage,
p->core_id, p->mmu, p->fpu,
(c6x_core_freq + 500000) / 1000000,
(loops_per_jiffy/(500000/HZ)),
(loops_per_jiffy/(5000/HZ))%100);
return 0;
}
static void *c_start(struct seq_file *m, loff_t *pos)
{
return *pos < nr_cpu_ids ? cpu_to_ptr(*pos) : NULL;
}
static void *c_next(struct seq_file *m, void *v, loff_t *pos)
{
++*pos;
return NULL;
}
static void c_stop(struct seq_file *m, void *v)
{
}
const struct seq_operations cpuinfo_op = {
	.start	= c_start,
	.stop	= c_stop,
	.next	= c_next,
	.show	= show_cpuinfo,
};
static struct cpu cpu_devices[NR_CPUS];
static int __init topology_init(void)
{
int i;
for_each_present_cpu(i)
register_cpu(&cpu_devices[i], i);
return 0;
}
subsys_initcall(topology_init);
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2006, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*
* Updated for 2.6.34: Mark Salter <msalter@redhat.com>
*/
#include <linux/kernel.h>
#include <linux/uaccess.h>
#include <linux/syscalls.h>
#include <linux/tracehook.h>
#include <asm/asm-offsets.h>
#include <asm/ucontext.h>
#include <asm/cacheflush.h>
/*
* Do a signal return, undo the signal stack.
*/
#define RETCODE_SIZE (9 << 2) /* 9 instructions = 36 bytes */
struct rt_sigframe {
struct siginfo __user *pinfo;
void __user *puc;
struct siginfo info;
struct ucontext uc;
unsigned long retcode[RETCODE_SIZE >> 2];
};
static int restore_sigcontext(struct pt_regs *regs,
struct sigcontext __user *sc)
{
int err = 0;
/* The access_ok check was done by caller, so use __get_user here */
#define COPY(x) (err |= __get_user(regs->x, &sc->sc_##x))
COPY(sp); COPY(a4); COPY(b4); COPY(a6); COPY(b6); COPY(a8); COPY(b8);
COPY(a0); COPY(a1); COPY(a2); COPY(a3); COPY(a5); COPY(a7); COPY(a9);
COPY(b0); COPY(b1); COPY(b2); COPY(b3); COPY(b5); COPY(b7); COPY(b9);
COPY(a16); COPY(a17); COPY(a18); COPY(a19);
COPY(a20); COPY(a21); COPY(a22); COPY(a23);
COPY(a24); COPY(a25); COPY(a26); COPY(a27);
COPY(a28); COPY(a29); COPY(a30); COPY(a31);
COPY(b16); COPY(b17); COPY(b18); COPY(b19);
COPY(b20); COPY(b21); COPY(b22); COPY(b23);
COPY(b24); COPY(b25); COPY(b26); COPY(b27);
COPY(b28); COPY(b29); COPY(b30); COPY(b31);
COPY(csr); COPY(pc);
#undef COPY
return err;
}
asmlinkage int do_rt_sigreturn(struct pt_regs *regs)
{
struct rt_sigframe __user *frame;
sigset_t set;
/* Always make any pending restarted system calls return -EINTR */
current->restart_block.fn = do_no_restart_syscall;
/*
* Since we stacked the signal on a dword boundary,
* 'sp' should be dword aligned here. If it's
* not, then the user is trying to mess with us.
*/
if (regs->sp & 7)
goto badframe;
frame = (struct rt_sigframe __user *) ((unsigned long) regs->sp + 8);
if (!access_ok(frame, sizeof(*frame)))
goto badframe;
if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
goto badframe;
set_current_blocked(&set);
if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
goto badframe;
return regs->a4;
badframe:
force_sig(SIGSEGV);
return 0;
}
static int setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
unsigned long mask)
{
int err = 0;
err |= __put_user(mask, &sc->sc_mask);
/* The access_ok check was done by caller, so use __put_user here */
#define COPY(x) (err |= __put_user(regs->x, &sc->sc_##x))
COPY(sp); COPY(a4); COPY(b4); COPY(a6); COPY(b6); COPY(a8); COPY(b8);
COPY(a0); COPY(a1); COPY(a2); COPY(a3); COPY(a5); COPY(a7); COPY(a9);
COPY(b0); COPY(b1); COPY(b2); COPY(b3); COPY(b5); COPY(b7); COPY(b9);
COPY(a16); COPY(a17); COPY(a18); COPY(a19);
COPY(a20); COPY(a21); COPY(a22); COPY(a23);
COPY(a24); COPY(a25); COPY(a26); COPY(a27);
COPY(a28); COPY(a29); COPY(a30); COPY(a31);
COPY(b16); COPY(b17); COPY(b18); COPY(b19);
COPY(b20); COPY(b21); COPY(b22); COPY(b23);
COPY(b24); COPY(b25); COPY(b26); COPY(b27);
COPY(b28); COPY(b29); COPY(b30); COPY(b31);
COPY(csr); COPY(pc);
#undef COPY
return err;
}
static inline void __user *get_sigframe(struct ksignal *ksig,
struct pt_regs *regs,
unsigned long framesize)
{
unsigned long sp = sigsp(regs->sp, ksig);
/*
* No matter what happens, 'sp' must be dword
* aligned. Otherwise, nasty things will happen
*/
return (void __user *)((sp - framesize) & ~7);
}
static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
struct pt_regs *regs)
{
struct rt_sigframe __user *frame;
unsigned long __user *retcode;
int err = 0;
frame = get_sigframe(ksig, regs, sizeof(*frame));
if (!access_ok(frame, sizeof(*frame)))
return -EFAULT;
err |= __put_user(&frame->info, &frame->pinfo);
err |= __put_user(&frame->uc, &frame->puc);
err |= copy_siginfo_to_user(&frame->info, &ksig->info);
/* Clear all the bits of the ucontext we don't use. */
err |= __clear_user(&frame->uc, offsetof(struct ucontext, uc_mcontext));
err |= setup_sigcontext(&frame->uc.uc_mcontext, regs, set->sig[0]);
err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
/* Set up to return from userspace */
retcode = (unsigned long __user *) &frame->retcode;
/* The access_ok check was done above, so use __put_user here */
#define COPY(x) (err |= __put_user(x, retcode++))
COPY(0x0000002AUL | (__NR_rt_sigreturn << 7));
/* MVK __NR_rt_sigreturn,B0 */
COPY(0x10000000UL); /* SWE */
COPY(0x00006000UL); /* NOP 4 */
COPY(0x00006000UL); /* NOP 4 */
COPY(0x00006000UL); /* NOP 4 */
COPY(0x00006000UL); /* NOP 4 */
COPY(0x00006000UL); /* NOP 4 */
COPY(0x00006000UL); /* NOP 4 */
COPY(0x00006000UL); /* NOP 4 */
#undef COPY
if (err)
return -EFAULT;
flush_icache_range((unsigned long) &frame->retcode,
(unsigned long) &frame->retcode + RETCODE_SIZE);
retcode = (unsigned long __user *) &frame->retcode;
/* Change user context to branch to signal handler */
regs->sp = (unsigned long) frame - 8;
regs->b3 = (unsigned long) retcode;
regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
/* Give the signal number to the handler */
regs->a4 = ksig->sig;
/*
* For realtime signals we must also set the second and third
* arguments for the signal handler.
* -- Peter Maydell <pmaydell@chiark.greenend.org.uk> 2000-12-06
*/
regs->b4 = (unsigned long)&frame->info;
regs->a6 = (unsigned long)&frame->uc;
return 0;
}
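/*
 * Sketch of the resulting return path (illustrative): the handler runs
 * at ksig->ka.sa.sa_handler with B3 pointing at the retcode trampoline
 * built above, so returning from the handler branches to
 *
 *	MVK  __NR_rt_sigreturn,B0	; encoded as 0x2A | (nr << 7)
 *	SWE				; trap back into the kernel
 *
 * and the kernel re-enters via sys_rt_sigreturn, where
 * do_rt_sigreturn() restores the saved context from this frame.
 */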
static inline void
handle_restart(struct pt_regs *regs, struct k_sigaction *ka, int has_handler)
{
switch (regs->a4) {
case -ERESTARTNOHAND:
if (!has_handler)
goto do_restart;
regs->a4 = -EINTR;
break;
case -ERESTARTSYS:
if (has_handler && !(ka->sa.sa_flags & SA_RESTART)) {
regs->a4 = -EINTR;
break;
}
fallthrough;
case -ERESTARTNOINTR:
do_restart:
regs->a4 = regs->orig_a4;
regs->pc -= 4;
break;
}
}
/*
* handle the actual delivery of a signal to userspace
*/
static void handle_signal(struct ksignal *ksig, struct pt_regs *regs,
int syscall)
{
int ret;
/* Are we from a system call? */
if (syscall) {
/* If so, check system call restarting.. */
switch (regs->a4) {
case -ERESTART_RESTARTBLOCK:
case -ERESTARTNOHAND:
regs->a4 = -EINTR;
break;
case -ERESTARTSYS:
if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
regs->a4 = -EINTR;
break;
}
fallthrough;
case -ERESTARTNOINTR:
regs->a4 = regs->orig_a4;
regs->pc -= 4;
}
}
/* Set up the stack frame */
ret = setup_rt_frame(ksig, sigmask_to_save(), regs);
signal_setup_done(ret, ksig, 0);
}
/*
* handle a potential signal
*/
static void do_signal(struct pt_regs *regs, int syscall)
{
struct ksignal ksig;
/* we want the common case to go fast, which is why we may in certain
* cases get here from kernel mode */
if (!user_mode(regs))
return;
if (get_signal(&ksig)) {
handle_signal(&ksig, regs, syscall);
return;
}
/* did we come from a system call? */
if (syscall) {
/* restart the system call - no handlers present */
switch (regs->a4) {
case -ERESTARTNOHAND:
case -ERESTARTSYS:
case -ERESTARTNOINTR:
regs->a4 = regs->orig_a4;
regs->pc -= 4;
break;
case -ERESTART_RESTARTBLOCK:
regs->a4 = regs->orig_a4;
regs->b0 = __NR_restart_syscall;
regs->pc -= 4;
break;
}
}
/* if there's no signal to deliver, we just put the saved sigmask
* back */
restore_saved_sigmask();
}
/*
* notification of userspace execution resumption
* - triggered by current->work.notify_resume
*/
asmlinkage void do_notify_resume(struct pt_regs *regs, u32 thread_info_flags,
int syscall)
{
/* deal with pending signal delivery */
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
do_signal(regs, syscall);
if (thread_info_flags & (1 << TIF_NOTIFY_RESUME))
tracehook_notify_resume(regs);
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Miscellaneous SoC-specific hooks.
*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#include <linux/module.h>
#include <linux/ctype.h>
#include <linux/etherdevice.h>
#include <asm/setup.h>
#include <asm/soc.h>
struct soc_ops soc_ops;
int soc_get_exception(void)
{
if (!soc_ops.get_exception)
return -1;
return soc_ops.get_exception();
}
void soc_assert_event(unsigned int evt)
{
if (soc_ops.assert_event)
soc_ops.assert_event(evt);
}
static u8 cmdline_mac[6];
static int __init get_mac_addr_from_cmdline(char *str)
{
int count, i, val;
for (count = 0; count < 6 && *str; count++, str += 3) {
if (!isxdigit(str[0]) || !isxdigit(str[1]))
return 0;
if (str[2] != ((count < 5) ? ':' : '\0'))
return 0;
for (i = 0, val = 0; i < 2; i++) {
val = val << 4;
val |= isdigit(str[i]) ?
str[i] - '0' : toupper(str[i]) - 'A' + 10;
}
cmdline_mac[count] = val;
}
return 1;
}
__setup("emac_addr=", get_mac_addr_from_cmdline);
/*
* Setup the MAC address for SoC ethernet devices.
*
* Before calling this function, the ethernet driver will have
* initialized the addr with local-mac-address from the device
* tree (if found). The command line may override that address;
* the fused address is used only when neither is supplied.
*/
int soc_mac_addr(unsigned int index, u8 *addr)
{
int i, have_dt_mac = 0, have_cmdline_mac = 0, have_fuse_mac = 0;
for (i = 0; i < 6; i++) {
if (cmdline_mac[i])
have_cmdline_mac = 1;
if (c6x_fuse_mac[i])
have_fuse_mac = 1;
if (addr[i])
have_dt_mac = 1;
}
/* cmdline overrides all */
if (have_cmdline_mac)
memcpy(addr, cmdline_mac, 6);
else if (!have_dt_mac) {
if (have_fuse_mac)
memcpy(addr, c6x_fuse_mac, 6);
else
eth_random_addr(addr);
}
/* adjust for specific EMAC device */
addr[5] += index * c6x_num_cores;
return 1;
}
EXPORT_SYMBOL_GPL(soc_mac_addr);
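/*
 * Example of the resulting precedence (addresses are illustrative):
 * passing emac_addr=02:11:22:33:44:50 on the command line wins over
 * both the device tree and the fused address; with no command-line or
 * DT address the fused address is used, and a random one only as a
 * last resort.  The final adjustment means that on a 2-core SoC the
 * EMAC with index 1 would end up with ...:52 as its last octet.
 */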
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter (msalter@redhat.com)
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#define SP B15
/*
* void __switch_to(struct thread_info *prev,
* struct thread_info *next,
* struct task_struct *tsk) ;
*/
ENTRY(__switch_to)
LDDW .D2T2 *+B4(THREAD_B15_14),B7:B6
|| MV .L2X A4,B5 ; prev
|| MV .L1X B4,A5 ; next
|| MVC .S2 RILC,B1
STW .D2T2 B3,*+B5(THREAD_PC)
|| STDW .D1T1 A13:A12,*+A4(THREAD_A13_12)
|| MVC .S2 ILC,B0
LDW .D2T2 *+B4(THREAD_PC),B3
|| LDDW .D1T1 *+A5(THREAD_A13_12),A13:A12
STDW .D1T1 A11:A10,*+A4(THREAD_A11_10)
|| STDW .D2T2 B1:B0,*+B5(THREAD_RICL_ICL)
#ifndef __DSBT__
|| MVKL .S2 current_ksp,B1
#endif
STDW .D2T2 B15:B14,*+B5(THREAD_B15_14)
|| STDW .D1T1 A15:A14,*+A4(THREAD_A15_14)
#ifndef __DSBT__
|| MVKH .S2 current_ksp,B1
#endif
;; Switch to next SP
MV .S2 B7,SP
#ifdef __DSBT__
|| STW .D2T2 B7,*+B14(current_ksp)
#else
|| STW .D2T2 B7,*B1
|| MV .L2 B6,B14
#endif
|| LDDW .D1T1 *+A5(THREAD_RICL_ICL),A1:A0
STDW .D2T2 B11:B10,*+B5(THREAD_B11_10)
|| LDDW .D1T1 *+A5(THREAD_A15_14),A15:A14
STDW .D2T2 B13:B12,*+B5(THREAD_B13_12)
|| LDDW .D1T1 *+A5(THREAD_A11_10),A11:A10
B .S2 B3 ; return in next E1
|| LDDW .D2T2 *+B4(THREAD_B13_12),B13:B12
LDDW .D2T2 *+B4(THREAD_B11_10),B11:B10
NOP
MV .L2X A0,B0
|| MV .S1 A6,A4
MVC .S2 B0,ILC
|| MV .L2X A1,B1
MVC .S2 B1,RILC
ENDPROC(__switch_to)
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#include <linux/module.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <asm/syscalls.h>
#ifdef CONFIG_ACCESS_CHECK
int _access_ok(unsigned long addr, unsigned long size)
{
if (!size)
return 1;
if (!addr || addr > (0xffffffffUL - (size - 1)))
goto _bad_access;
if (uaccess_kernel())
return 1;
if (memory_start <= addr && (addr + size - 1) < memory_end)
return 1;
_bad_access:
pr_debug("Bad access attempt: pid[%d] addr[%08lx] size[0x%lx]\n",
current->pid, addr, size);
return 0;
}
EXPORT_SYMBOL(_access_ok);
#endif
/* sys_cache_sync -- sync caches over given range */
asmlinkage int sys_cache_sync(unsigned long s, unsigned long e)
{
L1D_cache_block_writeback_invalidate(s, e);
L1P_cache_block_invalidate(s, e);
return 0;
}
/* Provide the actual syscall number to call mapping. */
#undef __SYSCALL
#define __SYSCALL(nr, call) [nr] = (call),
/*
* Use trampolines
*/
#define sys_pread64 sys_pread_c6x
#define sys_pwrite64 sys_pwrite_c6x
#define sys_truncate64 sys_truncate64_c6x
#define sys_ftruncate64 sys_ftruncate64_c6x
#define sys_fadvise64 sys_fadvise64_c6x
#define sys_fadvise64_64 sys_fadvise64_64_c6x
#define sys_fallocate sys_fallocate_c6x
/* Use sys_mmap_pgoff directly */
#define sys_mmap2 sys_mmap_pgoff
/*
* Note that we can't include <linux/unistd.h> here since the header
* guard will defeat us; <asm/unistd.h> checks for __SYSCALL as well.
*/
void *sys_call_table[__NR_syscalls] = {
[0 ... __NR_syscalls-1] = sys_ni_syscall,
#include <asm/unistd.h>
};
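/*
 * Rough sketch of how the table above gets filled in (the expansion
 * shown is illustrative): with __SYSCALL defined as "[nr] = (call),",
 * each entry of the generic syscall list pulled in via <asm/unistd.h>
 * expands to a designated initializer, e.g.
 *
 *	__SYSCALL(__NR_read, sys_read)  =>  [__NR_read] = (sys_read),
 *
 * while the GNU range initializer [0 ... __NR_syscalls-1] leaves every
 * unimplemented slot pointing at sys_ni_syscall.
 */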
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#include <linux/kernel.h>
#include <linux/clocksource.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/param.h>
#include <linux/string.h>
#include <linux/mm.h>
#include <linux/interrupt.h>
#include <linux/timex.h>
#include <linux/profile.h>
#include <asm/special_insns.h>
#include <asm/timer64.h>
static u32 sched_clock_multiplier;
#define SCHED_CLOCK_SHIFT 16
static u64 tsc_read(struct clocksource *cs)
{
return get_cycles();
}
static struct clocksource clocksource_tsc = {
.name = "timestamp",
.rating = 300,
.read = tsc_read,
.mask = CLOCKSOURCE_MASK(64),
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
/*
* scheduler clock - returns current time in nanoseconds.
*/
u64 sched_clock(void)
{
u64 tsc = get_cycles();
return (tsc * sched_clock_multiplier) >> SCHED_CLOCK_SHIFT;
}
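/*
 * sched_clock_multiplier is a 16.16 fixed-point nanoseconds-per-cycle
 * factor computed in time_init() below.  For example, assuming a
 * 700 MHz core clock: (10^9 << 16) / 700000000 ~= 93622, so
 * cycles * 93622 >> 16 ~= cycles * 1.4286 ns, i.e. 1/0.7 ns per cycle.
 */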
void __init time_init(void)
{
u64 tmp = (u64)NSEC_PER_SEC << SCHED_CLOCK_SHIFT;
do_div(tmp, c6x_core_freq);
sched_clock_multiplier = tmp;
clocksource_register_hz(&clocksource_tsc, c6x_core_freq);
/* write anything into TSCL to enable counting */
set_creg(TSCL, 0);
/* probe for timer64 event timer */
timer64_init();
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2006, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#include <linux/module.h>
#include <linux/ptrace.h>
#include <linux/sched/debug.h>
#include <linux/bug.h>
#include <asm/soc.h>
#include <asm/special_insns.h>
#include <asm/traps.h>
int (*c6x_nmi_handler)(struct pt_regs *regs);
void __init trap_init(void)
{
ack_exception(EXCEPT_TYPE_NXF);
ack_exception(EXCEPT_TYPE_EXC);
ack_exception(EXCEPT_TYPE_IXF);
ack_exception(EXCEPT_TYPE_SXF);
enable_exception();
}
void show_regs(struct pt_regs *regs)
{
pr_err("\n");
show_regs_print_info(KERN_ERR);
pr_err("PC: %08lx SP: %08lx\n", regs->pc, regs->sp);
pr_err("Status: %08lx ORIG_A4: %08lx\n", regs->csr, regs->orig_a4);
pr_err("A0: %08lx B0: %08lx\n", regs->a0, regs->b0);
pr_err("A1: %08lx B1: %08lx\n", regs->a1, regs->b1);
pr_err("A2: %08lx B2: %08lx\n", regs->a2, regs->b2);
pr_err("A3: %08lx B3: %08lx\n", regs->a3, regs->b3);
pr_err("A4: %08lx B4: %08lx\n", regs->a4, regs->b4);
pr_err("A5: %08lx B5: %08lx\n", regs->a5, regs->b5);
pr_err("A6: %08lx B6: %08lx\n", regs->a6, regs->b6);
pr_err("A7: %08lx B7: %08lx\n", regs->a7, regs->b7);
pr_err("A8: %08lx B8: %08lx\n", regs->a8, regs->b8);
pr_err("A9: %08lx B9: %08lx\n", regs->a9, regs->b9);
pr_err("A10: %08lx B10: %08lx\n", regs->a10, regs->b10);
pr_err("A11: %08lx B11: %08lx\n", regs->a11, regs->b11);
pr_err("A12: %08lx B12: %08lx\n", regs->a12, regs->b12);
pr_err("A13: %08lx B13: %08lx\n", regs->a13, regs->b13);
pr_err("A14: %08lx B14: %08lx\n", regs->a14, regs->dp);
pr_err("A15: %08lx B15: %08lx\n", regs->a15, regs->sp);
pr_err("A16: %08lx B16: %08lx\n", regs->a16, regs->b16);
pr_err("A17: %08lx B17: %08lx\n", regs->a17, regs->b17);
pr_err("A18: %08lx B18: %08lx\n", regs->a18, regs->b18);
pr_err("A19: %08lx B19: %08lx\n", regs->a19, regs->b19);
pr_err("A20: %08lx B20: %08lx\n", regs->a20, regs->b20);
pr_err("A21: %08lx B21: %08lx\n", regs->a21, regs->b21);
pr_err("A22: %08lx B22: %08lx\n", regs->a22, regs->b22);
pr_err("A23: %08lx B23: %08lx\n", regs->a23, regs->b23);
pr_err("A24: %08lx B24: %08lx\n", regs->a24, regs->b24);
pr_err("A25: %08lx B25: %08lx\n", regs->a25, regs->b25);
pr_err("A26: %08lx B26: %08lx\n", regs->a26, regs->b26);
pr_err("A27: %08lx B27: %08lx\n", regs->a27, regs->b27);
pr_err("A28: %08lx B28: %08lx\n", regs->a28, regs->b28);
pr_err("A29: %08lx B29: %08lx\n", regs->a29, regs->b29);
pr_err("A30: %08lx B30: %08lx\n", regs->a30, regs->b30);
pr_err("A31: %08lx B31: %08lx\n", regs->a31, regs->b31);
}
void die(char *str, struct pt_regs *fp, int nr)
{
console_verbose();
pr_err("%s: %08x\n", str, nr);
show_regs(fp);
pr_err("Process %s (pid: %d, stackpage=%08lx)\n",
current->comm, current->pid, (PAGE_SIZE +
(unsigned long) current));
dump_stack();
while (1)
;
}
static void die_if_kernel(char *str, struct pt_regs *fp, int nr)
{
if (user_mode(fp))
return;
die(str, fp, nr);
}
/* Internal exceptions */
static struct exception_info iexcept_table[10] = {
{ "Oops - instruction fetch", SIGBUS, BUS_ADRERR },
{ "Oops - fetch packet", SIGBUS, BUS_ADRERR },
{ "Oops - execute packet", SIGILL, ILL_ILLOPC },
{ "Oops - undefined instruction", SIGILL, ILL_ILLOPC },
{ "Oops - resource conflict", SIGILL, ILL_ILLOPC },
{ "Oops - resource access", SIGILL, ILL_PRVREG },
{ "Oops - privilege", SIGILL, ILL_PRVOPC },
{ "Oops - loops buffer", SIGILL, ILL_ILLOPC },
{ "Oops - software exception", SIGILL, ILL_ILLTRP },
{ "Oops - unknown exception", SIGILL, ILL_ILLOPC }
};
/* External exceptions */
static struct exception_info eexcept_table[128] = {
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - external exception", SIGBUS, BUS_ADRERR },
{ "Oops - CPU memory protection fault", SIGSEGV, SEGV_ACCERR },
{ "Oops - CPU memory protection fault in L1P", SIGSEGV, SEGV_ACCERR },
{ "Oops - DMA memory protection fault in L1P", SIGSEGV, SEGV_ACCERR },
{ "Oops - CPU memory protection fault in L1D", SIGSEGV, SEGV_ACCERR },
{ "Oops - DMA memory protection fault in L1D", SIGSEGV, SEGV_ACCERR },
{ "Oops - CPU memory protection fault in L2", SIGSEGV, SEGV_ACCERR },
{ "Oops - DMA memory protection fault in L2", SIGSEGV, SEGV_ACCERR },
{ "Oops - EMC CPU memory protection fault", SIGSEGV, SEGV_ACCERR },
{ "Oops - EMC bus error", SIGBUS, BUS_ADRERR }
};
static void do_trap(struct exception_info *except_info, struct pt_regs *regs)
{
unsigned long addr = instruction_pointer(regs);
if (except_info->code != TRAP_BRKPT)
pr_err("TRAP: %s PC[0x%lx] signo[%d] code[%d]\n",
except_info->kernel_str, regs->pc,
except_info->signo, except_info->code);
die_if_kernel(except_info->kernel_str, regs, addr);
force_sig_fault(except_info->signo, except_info->code,
(void __user *)addr);
}
/*
* Process an internal exception (non-maskable)
*/
static int process_iexcept(struct pt_regs *regs)
{
unsigned int iexcept_report = get_iexcept();
unsigned int iexcept_num;
ack_exception(EXCEPT_TYPE_IXF);
pr_err("IEXCEPT: PC[0x%lx]\n", regs->pc);
while (iexcept_report) {
iexcept_num = __ffs(iexcept_report);
iexcept_report &= ~(1 << iexcept_num);
set_iexcept(iexcept_report);
if (*(unsigned int *)regs->pc == BKPT_OPCODE) {
/* This is a breakpoint */
struct exception_info bkpt_exception = {
"Oops - undefined instruction",
SIGTRAP, TRAP_BRKPT
};
do_trap(&bkpt_exception, regs);
iexcept_report &= ~(0xFF);
set_iexcept(iexcept_report);
continue;
}
do_trap(&iexcept_table[iexcept_num], regs);
}
return 0;
}
/*
* Process an external exception (maskable)
*/
static void process_eexcept(struct pt_regs *regs)
{
int evt;
pr_err("EEXCEPT: PC[0x%lx]\n", regs->pc);
while ((evt = soc_get_exception()) >= 0)
do_trap(&eexcept_table[evt], regs);
ack_exception(EXCEPT_TYPE_EXC);
}
/*
* Main exception processing
*/
asmlinkage int process_exception(struct pt_regs *regs)
{
unsigned int type;
unsigned int type_num;
unsigned int ie_num = 9; /* default is unknown exception */
while ((type = get_except_type()) != 0) {
type_num = fls(type) - 1;
switch (type_num) {
case EXCEPT_TYPE_NXF:
ack_exception(EXCEPT_TYPE_NXF);
if (c6x_nmi_handler)
(c6x_nmi_handler)(regs);
else
pr_alert("NMI interrupt!\n");
break;
case EXCEPT_TYPE_IXF:
if (process_iexcept(regs))
return 1;
break;
case EXCEPT_TYPE_EXC:
process_eexcept(regs);
break;
case EXCEPT_TYPE_SXF:
ie_num = 8;
fallthrough;
default:
ack_exception(type_num);
do_trap(&iexcept_table[ie_num], regs);
break;
}
}
return 0;
}
static int kstack_depth_to_print = 48;
static void show_trace(unsigned long *stack, unsigned long *endstack,
const char *loglvl)
{
unsigned long addr;
int i;
printk("%sCall trace:", loglvl);
i = 0;
while (stack + 1 <= endstack) {
addr = *stack++;
/*
* If the address is either in the text segment of the
* kernel, or in the region which contains vmalloc'ed
* memory, it *may* be the address of a calling
* routine; if so, print it so that someone tracing
* down the cause of the crash will be able to figure
* out the call path that was taken.
*/
if (__kernel_text_address(addr)) {
#ifndef CONFIG_KALLSYMS
if (i % 5 == 0)
printk("%s\n ", loglvl);
#endif
printk("%s [<%08lx>] %pS\n", loglvl, addr, (void *)addr);
i++;
}
}
printk("%s\n", loglvl);
}
void show_stack(struct task_struct *task, unsigned long *stack,
const char *loglvl)
{
unsigned long *p, *endstack;
int i;
if (!stack) {
if (task && task != current)
/* We know this is a kernel stack,
so this is the start/end */
stack = (unsigned long *)thread_saved_ksp(task);
else
stack = (unsigned long *)&stack;
}
endstack = (unsigned long *)(((unsigned long)stack + THREAD_SIZE - 1)
& -THREAD_SIZE);
pr_debug("Stack from %08lx:", (unsigned long)stack);
for (i = 0, p = stack; i < kstack_depth_to_print; i++) {
if (p + 1 > endstack)
break;
if (i % 8 == 0)
pr_cont("\n ");
pr_cont(" %08lx", *p++);
}
pr_cont("\n");
show_trace(stack, endstack, loglvl);
}
int is_valid_bugaddr(unsigned long addr)
{
return __kernel_text_address(addr);
}
; SPDX-License-Identifier: GPL-2.0-only
;
; Port on Texas Instruments TMS320C6x architecture
;
; Copyright (C) 2004, 2006, 2009, 2010, 2011 Texas Instruments Incorporated
; Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
;
; This section handles all the interrupt vector routines.
; At RESET the processor sets up the DRAM timing parameters and
; branches to the label _c_int00 which handles initialization for the C code.
;
#define ALIGNMENT 5
.macro IRQVEC name, handler
.align ALIGNMENT
.hidden \name
.global \name
\name:
#ifdef CONFIG_C6X_BIG_KERNEL
STW .D2T1 A0,*B15--[2]
|| MVKL .S1 \handler,A0
MVKH .S1 \handler,A0
B .S2X A0
LDW .D2T1 *++B15[2],A0
NOP 4
NOP
NOP
.endm
#else /* CONFIG_C6X_BIG_KERNEL */
B .S2 \handler
NOP
NOP
NOP
NOP
NOP
NOP
NOP
.endm
#endif /* CONFIG_C6X_BIG_KERNEL */
.sect ".vectors","ax"
.align ALIGNMENT
.global RESET
.hidden RESET
RESET:
#ifdef CONFIG_C6X_BIG_KERNEL
MVKL .S1 _c_int00,A0 ; branch to _c_int00
MVKH .S1 _c_int00,A0
B .S2X A0
#else
B .S2 _c_int00
NOP
NOP
#endif
NOP
NOP
NOP
NOP
NOP
IRQVEC NMI,_nmi_handler ; NMI interrupt
IRQVEC AINT,_bad_interrupt ; reserved
IRQVEC MSGINT,_bad_interrupt ; reserved
IRQVEC INT4,_int4_handler
IRQVEC INT5,_int5_handler
IRQVEC INT6,_int6_handler
IRQVEC INT7,_int7_handler
IRQVEC INT8,_int8_handler
IRQVEC INT9,_int9_handler
IRQVEC INT10,_int10_handler
IRQVEC INT11,_int11_handler
IRQVEC INT12,_int12_handler
IRQVEC INT13,_int13_handler
IRQVEC INT14,_int14_handler
IRQVEC INT15,_int15_handler
/* SPDX-License-Identifier: GPL-2.0 */
/*
* ld script for the c6x kernel
*
* Copyright (C) 2010, 2011 Texas Instruments Incorporated
* Mark Salter <msalter@redhat.com>
*/
#define RO_EXCEPTION_TABLE_ALIGN 16
#include <asm-generic/vmlinux.lds.h>
#include <asm/thread_info.h>
#include <asm/page.h>
ENTRY(_c_int00)
#if defined(CONFIG_CPU_BIG_ENDIAN)
jiffies = jiffies_64 + 4;
#else
jiffies = jiffies_64;
#endif
#define READONLY_SEGMENT_START \
. = PAGE_OFFSET;
#define READWRITE_SEGMENT_START \
. = ALIGN(128); \
_data_lma = .;
SECTIONS
{
/*
* Start kernel read only segment
*/
READONLY_SEGMENT_START
.vectors :
{
_vectors_start = .;
*(.vectors)
. = ALIGN(0x400);
_vectors_end = .;
}
/*
* This section contains data which may be shared with other
* cores. It needs to be a fixed offset from PAGE_OFFSET
* regardless of kernel configuration.
*/
.virtio_ipc_dev :
{
*(.virtio_ipc_dev)
}
. = ALIGN(PAGE_SIZE);
__init_begin = .;
.init :
{
_sinittext = .;
HEAD_TEXT
INIT_TEXT
_einittext = .;
}
INIT_DATA_SECTION(16)
PERCPU_SECTION(128)
. = ALIGN(PAGE_SIZE);
__init_end = .;
.text :
{
_text = .;
_stext = .;
TEXT_TEXT
SCHED_TEXT
CPUIDLE_TEXT
LOCK_TEXT
IRQENTRY_TEXT
SOFTIRQENTRY_TEXT
KPROBES_TEXT
*(.fixup)
*(.gnu.warning)
}
RO_DATA(PAGE_SIZE)
.const :
{
*(.const .const.* .gnu.linkonce.r.*)
*(.switch)
}
_etext = .;
/*
* Start kernel read-write segment.
*/
READWRITE_SEGMENT_START
_sdata = .;
.fardata : AT(ADDR(.fardata) - LOAD_OFFSET)
{
INIT_TASK_DATA(THREAD_SIZE)
NOSAVE_DATA
PAGE_ALIGNED_DATA(PAGE_SIZE)
CACHELINE_ALIGNED_DATA(128)
READ_MOSTLY_DATA(128)
DATA_DATA
CONSTRUCTORS
*(.data1)
*(.fardata .fardata.*)
*(.data.debug_bpt)
}
.neardata ALIGN(8) : AT(ADDR(.neardata) - LOAD_OFFSET)
{
*(.neardata2 .neardata2.* .gnu.linkonce.s2.*)
*(.neardata .neardata.* .gnu.linkonce.s.*)
. = ALIGN(8);
}
BUG_TABLE
_edata = .;
__bss_start = .;
SBSS(8)
BSS(8)
.far :
{
. = ALIGN(8);
*(.dynfar)
*(.far .far.* .gnu.linkonce.b.*)
. = ALIGN(8);
}
__bss_stop = .;
_end = .;
DWARF_DEBUG
/DISCARD/ :
{
EXIT_TEXT
EXIT_DATA
EXIT_CALL
*(.discard)
*(.discard.*)
*(.interp)
}
}
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for arch/c6x/lib/
#
lib-y := divu.o divi.o pop_rts.o push_rts.o remi.o remu.o strasgi.o llshru.o
lib-y += llshr.o llshl.o negll.o mpyll.o divremi.o divremu.o
lib-y += checksum.o csum_64plus.o memcpy_64plus.o strasgi_64plus.o
// SPDX-License-Identifier: GPL-2.0-or-later
/*
*/
#include <linux/module.h>
#include <net/checksum.h>
/* These are from csum_64plus.S */
EXPORT_SYMBOL(csum_partial);
EXPORT_SYMBOL(csum_partial_copy_nocheck);
EXPORT_SYMBOL(ip_compute_csum);
EXPORT_SYMBOL(ip_fast_csum);
; SPDX-License-Identifier: GPL-2.0-only
;
; linux/arch/c6x/lib/csum_64plus.s
;
; Port on Texas Instruments TMS320C6x architecture
;
; Copyright (C) 2006, 2009, 2010, 2011 Texas Instruments Incorporated
; Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
;
#include <linux/linkage.h>
;
;unsigned int csum_partial_copy_nocheck(const char *src, char * dst,
; int len, int sum)
;
; A4: src
; B4: dst
; A6: len
; B6: sum
; return csum in A4
;
.text
ENTRY(csum_partial_copy_nocheck)
MVC .S2 ILC,B30
ZERO .D1 A9 ; csum (a side)
|| ZERO .D2 B9 ; csum (b side)
|| SHRU .S2X A6,2,B5 ; len / 4
;; Check alignment and size
AND .S1 3,A4,A1
|| AND .S2 3,B4,B0
OR .L2X B0,A1,B0 ; non aligned condition
|| MVC .S2 B5,ILC
|| MVK .D2 1,B2
|| MV .D1X B5,A1 ; words condition
[!A1] B .S1 L8
[B0] BNOP .S1 L6,5
SPLOOP 1
;; Main loop for aligned words
LDW .D1T1 *A4++,A7
NOP 4
MV .S2X A7,B7
|| EXTU .S1 A7,0,16,A16
STW .D2T2 B7,*B4++
|| MPYU .M2 B7,B2,B8
|| ADD .L1 A16,A9,A9
NOP
SPKERNEL 8,0
|| ADD .L2 B8,B9,B9
ZERO .D1 A1
|| ADD .L1X A9,B9,A9 ; add csum from a and b sides
L6:
[!A1] BNOP .S1 L8,5
;; Main loop for non-aligned words
SPLOOP 2
|| MVK .L1 1,A2
LDNW .D1T1 *A4++,A7
NOP 3
NOP
MV .S2X A7,B7
|| EXTU .S1 A7,0,16,A16
|| MPYU .M1 A7,A2,A8
ADD .L1 A16,A9,A9
SPKERNEL 6,0
|| STNW .D2T2 B7,*B4++
|| ADD .L1 A8,A9,A9
L8: AND .S2X 2,A6,B5
CMPGT .L2 B5,0,B0
[!B0] BNOP .S1 L82,4
;; Manage half-word
ZERO .L1 A7
|| ZERO .D1 A8
#ifdef CONFIG_CPU_BIG_ENDIAN
LDBU .D1T1 *A4++,A7
LDBU .D1T1 *A4++,A8
NOP 3
SHL .S1 A7,8,A0
ADD .S1 A8,A9,A9
STB .D2T1 A7,*B4++
|| ADD .S1 A0,A9,A9
STB .D2T1 A8,*B4++
#else
LDBU .D1T1 *A4++,A7
LDBU .D1T1 *A4++,A8
NOP 3
ADD .S1 A7,A9,A9
SHL .S1 A8,8,A0
STB .D2T1 A7,*B4++
|| ADD .S1 A0,A9,A9
STB .D2T1 A8,*B4++
#endif
;; Handle the last byte, if any
L82: AND .S2X 1,A6,B0
[!B0] BNOP .S1 L9,5
|| ZERO .L1 A7
L83: LDBU .D1T1 *A4++,A7
NOP 4
MV .L2X A7,B7
#ifdef CONFIG_CPU_BIG_ENDIAN
STB .D2T2 B7,*B4++
|| SHL .S1 A7,8,A7
ADD .S1 A7,A9,A9
#else
STB .D2T2 B7,*B4++
|| ADD .S1 A7,A9,A9
#endif
;; Fold the csum
L9: SHRU .S2X A9,16,B0
[!B0] BNOP .S1 L10,5
L91: SHRU .S2X A9,16,B4
|| EXTU .S1 A9,16,16,A3
ADD .D1X A3,B4,A9
SHRU .S1 A9,16,A0
[A0] BNOP .S1 L91,5
L10: MV .D1 A9,A4
BNOP .S2 B3,4
MVC .S2 B30,ILC
ENDPROC(csum_partial_copy_nocheck)
;
;unsigned short
;ip_fast_csum(unsigned char *iph, unsigned int ihl)
;{
; unsigned int checksum = 0;
; unsigned short *tosum = (unsigned short *) iph;
; int len;
;
; len = ihl*4;
;
; if (len <= 0)
; return 0;
;
; while(len) {
; len -= 2;
; checksum += *tosum++;
; }
; if (len & 1)
; checksum += *(unsigned char*) tosum;
;
; while(checksum >> 16)
; checksum = (checksum & 0xffff) + (checksum >> 16);
;
; return ~checksum;
;}
;
; A4: iph
; B4: ihl
; return checksum in A4
;
.text
ENTRY(ip_fast_csum)
ZERO .D1 A5
|| MVC .S2 ILC,B30
SHL .S2 B4,2,B0
CMPGT .L2 B0,0,B1
[!B1] BNOP .S1 L15,4
[!B1] ZERO .D1 A3
[!B0] B .S1 L12
SHRU .S2 B0,1,B0
MVC .S2 B0,ILC
NOP 3
SPLOOP 1
LDHU .D1T1 *A4++,A3
NOP 3
NOP
SPKERNEL 5,0
|| ADD .L1 A3,A5,A5
L12: SHRU .S1 A5,16,A0
[!A0] BNOP .S1 L14,5
L13: SHRU .S2X A5,16,B4
EXTU .S1 A5,16,16,A3
ADD .D1X A3,B4,A5
SHRU .S1 A5,16,A0
[A0] BNOP .S1 L13,5
L14: NOT .D1 A5,A3
EXTU .S1 A3,16,16,A3
L15: BNOP .S2 B3,3
MVC .S2 B30,ILC
MV .D1 A3,A4
ENDPROC(ip_fast_csum)
;
;unsigned short
;do_csum(unsigned char *buff, unsigned int len)
;{
; int odd, count;
; unsigned int result = 0;
;
; if (len <= 0)
; goto out;
; odd = 1 & (unsigned long) buff;
; if (odd) {
;#ifdef __LITTLE_ENDIAN
; result += (*buff << 8);
;#else
; result = *buff;
;#endif
; len--;
; buff++;
; }
; count = len >> 1; /* nr of 16-bit words.. */
; if (count) {
; if (2 & (unsigned long) buff) {
; result += *(unsigned short *) buff;
; count--;
; len -= 2;
; buff += 2;
; }
; count >>= 1; /* nr of 32-bit words.. */
; if (count) {
; unsigned int carry = 0;
; do {
; unsigned int w = *(unsigned int *) buff;
; count--;
; buff += 4;
; result += carry;
; result += w;
; carry = (w > result);
; } while (count);
; result += carry;
; result = (result & 0xffff) + (result >> 16);
; }
; if (len & 2) {
; result += *(unsigned short *) buff;
; buff += 2;
; }
; }
; if (len & 1)
;#ifdef __LITTLE_ENDIAN
; result += *buff;
;#else
; result += (*buff << 8);
;#endif
; result = (result & 0xffff) + (result >> 16);
; /* add up carry.. */
; result = (result & 0xffff) + (result >> 16);
; if (odd)
; result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
;out:
; return result;
;}
;
; A4: buff
; B4: len
; return checksum in A4
;
ENTRY(do_csum)
CMPGT .L2 B4,0,B0
[!B0] BNOP .S1 L26,3
EXTU .S1 A4,31,31,A0
MV .L1 A0,A3
|| MV .S1X B3,A5
|| MV .L2 B4,B3
|| ZERO .D1 A1
#ifdef CONFIG_CPU_BIG_ENDIAN
[A0] SUB .L2 B3,1,B3
|| [A0] LDBU .D1T1 *A4++,A1
#else
[!A0] BNOP .S1 L21,5
|| [A0] LDBU .D1T1 *A4++,A0
SUB .L2 B3,1,B3
|| SHL .S1 A0,8,A1
L21:
#endif
SHR .S2 B3,1,B0
[!B0] BNOP .S1 L24,3
MVK .L1 2,A0
AND .L1 A4,A0,A0
[!A0] BNOP .S1 L22,5
|| [A0] LDHU .D1T1 *A4++,A0
SUB .L2 B0,1,B0
|| SUB .S2 B3,2,B3
|| ADD .L1 A0,A1,A1
L22:
SHR .S2 B0,1,B0
|| ZERO .L1 A0
[!B0] BNOP .S1 L23,5
|| [B0] MVC .S2 B0,ILC
SPLOOP 3
SPMASK L1
|| MV .L1 A1,A2
|| LDW .D1T1 *A4++,A1
NOP 4
ADD .L1 A0,A1,A0
ADD .L1 A2,A0,A2
SPKERNEL 1,2
|| CMPGTU .L1 A1,A2,A0
ADD .L1 A0,A2,A6
EXTU .S1 A6,16,16,A7
SHRU .S2X A6,16,B0
NOP 1
ADD .L1X A7,B0,A1
L23:
MVK .L2 2,B0
AND .L2 B3,B0,B0
[B0] LDHU .D1T1 *A4++,A0
NOP 4
[B0] ADD .L1 A0,A1,A1
L24:
EXTU .S2 B3,31,31,B0
#ifdef CONFIG_CPU_BIG_ENDIAN
[!B0] BNOP .S1 L25,4
|| [B0] LDBU .D1T1 *A4,A0
SHL .S1 A0,8,A0
ADD .L1 A0,A1,A1
L25:
#else
[B0] LDBU .D1T1 *A4,A0
NOP 4
[B0] ADD .L1 A0,A1,A1
#endif
EXTU .S1 A1,16,16,A0
SHRU .S2X A1,16,B0
NOP 1
ADD .L1X A0,B0,A0
SHRU .S1 A0,16,A1
ADD .L1 A0,A1,A0
EXTU .S1 A0,16,16,A1
EXTU .S1 A1,16,24,A2
EXTU .S1 A1,24,16,A0
|| MV .L2X A3,B0
[B0] OR .L1 A0,A2,A1
L26:
NOP 1
BNOP .S2X A5,4
MV .L1 A1,A4
ENDPROC(do_csum)
;__wsum csum_partial(const void *buff, int len, __wsum wsum)
;{
; unsigned int sum = (__force unsigned int)wsum;
; unsigned int result = do_csum(buff, len);
;
; /* add in old sum, and carry.. */
; result += sum;
; if (sum > result)
; result += 1;
; return (__force __wsum)result;
;}
;
ENTRY(csum_partial)
MV .L1X B3,A9
|| CALLP .S2 do_csum,B3
|| MV .S1 A6,A8
BNOP .S2X A9,2
ADD .L1 A8,A4,A1
CMPGTU .L1 A8,A1,A0
ADD .L1 A1,A0,A4
ENDPROC(csum_partial)
;unsigned short
;ip_compute_csum(unsigned char *buff, unsigned int len)
;
; A4: buff
; B4: len
; return checksum in A4
ENTRY(ip_compute_csum)
MV .L1X B3,A9
|| CALLP .S2 do_csum,B3
BNOP .S2X A9,3
NOT .S1 A4,A4
CLR .S1 A4,16,31,A4
ENDPROC(ip_compute_csum)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
;; ABI considerations for the divide functions
;; The following registers are call-used:
;; __c6xabi_divi A0,A1,A2,A4,A6,B0,B1,B2,B4,B5
;; __c6xabi_divu A0,A1,A2,A4,A6,B0,B1,B2,B4
;; __c6xabi_remi A1,A2,A4,A5,A6,B0,B1,B2,B4
;; __c6xabi_remu A1,A4,A5,A7,B0,B1,B2,B4
;;
;; In our implementation, divu and remu are leaf functions,
;; while both divi and remi call into divu.
;; A0 is not clobbered by any of the functions.
;; divu does not clobber B2 either, which is taken advantage of
;; in remi.
;; divi uses B5 to hold the original return address during
;; the call to divu.
;; remi uses B2 and A5 to hold the input values during the
;; call to divu. It stores B3 on the stack.
.text
ENTRY(__c6xabi_divi)
call .s2 __c6xabi_divu
|| mv .d2 B3, B5
|| cmpgt .l1 0, A4, A1
|| cmpgt .l2 0, B4, B1
[A1] neg .l1 A4, A4
|| [B1] neg .l2 B4, B4
|| xor .s1x A1, B1, A1
[A1] addkpc .s2 _divu_ret, B3, 4
_divu_ret:
neg .l1 A4, A4
|| mv .l2 B3,B5
|| ret .s2 B5
nop 5
ENDPROC(__c6xabi_divi)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_divremi)
stw .d2t2 B3, *B15--[2]
|| cmpgt .l1 0, A4, A1
|| cmpgt .l2 0, B4, B2
|| mv .s1 A4, A5
|| call .s2 __c6xabi_divu
[A1] neg .l1 A4, A4
|| [B2] neg .l2 B4, B4
|| xor .s2x B2, A1, B0
|| mv .d2 B4, B2
[B0] addkpc .s2 _divu_ret_1, B3, 1
[!B0] addkpc .s2 _divu_ret_2, B3, 1
nop 2
_divu_ret_1:
neg .l1 A4, A4
_divu_ret_2:
ldw .d2t2 *++B15[2], B3
mpy32 .m1x A4, B2, A6
nop 3
ret .s2 B3
sub .l1 A5, A6, A5
nop 4
ENDPROC(__c6xabi_divremi)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2011 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_divremu)
;; We use a series of up to 31 subc instructions. First, we find
;; out how many leading zero bits there are in the divisor. This
;; gives us both a shift count for aligning (shifting) the divisor
;; to the top of the register, and the number of times we have to execute subc.
;; At the end, we have both the remainder and most of the quotient
;; in A4. The top bit of the quotient is computed first and is
;; placed in A2.
;; Return immediately if the dividend is zero. Setting B4 to 1
;; is a trick to allow us to leave the following insns in the jump
;; delay slot without affecting the result.
mv .s2x A4, B1
[b1] lmbd .l2 1, B4, B1
||[!b1] b .s2 B3 ; RETURN A
||[!b1] mvk .d2 1, B4
||[!b1] zero .s1 A5
mv .l1x B1, A6
|| shl .s2 B4, B1, B4
;; The loop performs a maximum of 28 steps, so we do the
;; first 3 here.
cmpltu .l1x A4, B4, A2
[!A2] sub .l1x A4, B4, A4
|| shru .s2 B4, 1, B4
|| xor .s1 1, A2, A2
shl .s1 A2, 31, A2
|| [b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
[b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
;; RETURN A may happen here (note: must happen before the next branch)
__divremu0:
cmpgt .l2 B1, 7, B0
|| [b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
[b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
|| [b0] b .s1 __divremu0
[b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
[b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
[b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
[b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
[b1] subc .l1x A4,B4,A4
|| [b1] add .s2 -1, B1, B1
;; loop backwards branch happens here
ret .s2 B3
|| mvk .s1 32, A1
sub .l1 A1, A6, A6
|| extu .s1 A4, A6, A5
shl .s1 A4, A6, A4
shru .s1 A4, 1, A4
|| sub .l1 A6, 1, A6
or .l1 A2, A4, A4
shru .s1 A4, A6, A4
nop
ENDPROC(__c6xabi_divremu)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
;; ABI considerations for the divide functions
;; The following registers are call-used:
;; __c6xabi_divi A0,A1,A2,A4,A6,B0,B1,B2,B4,B5
;; __c6xabi_divu A0,A1,A2,A4,A6,B0,B1,B2,B4
;; __c6xabi_remi A1,A2,A4,A5,A6,B0,B1,B2,B4
;; __c6xabi_remu A1,A4,A5,A7,B0,B1,B2,B4
;;
;; In our implementation, divu and remu are leaf functions,
;; while both divi and remi call into divu.
;; A0 is not clobbered by any of the functions.
;; divu does not clobber B2 either, which is taken advantage of
;; in remi.
;; divi uses B5 to hold the original return address during
;; the call to divu.
;; remi uses B2 and A5 to hold the input values during the
;; call to divu. It stores B3 on the stack.
.text
ENTRY(__c6xabi_divu)
;; We use a series of up to 31 subc instructions. First, we find
;; out how many leading zero bits there are in the divisor. This
;; gives us both a shift count for aligning (shifting) the divisor
;; to the top of the register, and the number of times we have to execute subc.
;; At the end, we have both the remainder and most of the quotient
;; in A4. The top bit of the quotient is computed first and is
;; placed in A2.
;; Return immediately if the dividend is zero.
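;;
;; A plain C model of the shift-and-subtract idea (a sketch only, not a
;; line-by-line translation of the lmbd/subc sequence below; the real
;; routine leaves a zero divisor undefined):
;;
;;	unsigned int udiv_model(unsigned int n, unsigned int d)
;;	{
;;		unsigned int q = 0, steps;
;;		if (n == 0 || d == 0)
;;			return 0;
;;		steps = __builtin_clz(d);	/* align divisor MSB with bit 31 */
;;		d <<= steps;
;;		do {
;;			q <<= 1;
;;			if (n >= d) {
;;				n -= d;
;;				q |= 1;
;;			}
;;			d >>= 1;
;;		} while (steps--);
;;		return q;	/* n is left holding the remainder */
;;	}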
mv .s2x A4, B1
[B1] lmbd .l2 1, B4, B1
|| [!B1] b .s2 B3 ; RETURN A
|| [!B1] mvk .d2 1, B4
mv .l1x B1, A6
|| shl .s2 B4, B1, B4
;; The loop performs a maximum of 28 steps, so we do the
;; first 3 here.
cmpltu .l1x A4, B4, A2
[!A2] sub .l1x A4, B4, A4
|| shru .s2 B4, 1, B4
|| xor .s1 1, A2, A2
shl .s1 A2, 31, A2
|| [B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
;; RETURN A may happen here (note: must happen before the next branch)
_divu_loop:
cmpgt .l2 B1, 7, B0
|| [B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
|| [B0] b .s1 _divu_loop
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
;; loop backwards branch happens here
ret .s2 B3
|| mvk .s1 32, A1
sub .l1 A1, A6, A6
shl .s1 A4, A6, A4
shru .s1 A4, 1, A4
|| sub .l1 A6, 1, A6
or .l1 A2, A4, A4
shru .s1 A4, A6, A4
nop
ENDPROC(__c6xabi_divu)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright (C) 2010 Texas Instruments Incorporated
;; Contributed by Mark Salter <msalter@redhat.com>.
;;
;; uint64_t __c6xabi_llshl(uint64_t val, uint shift)
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_llshl)
mv .l1x B4,A1
[!A1] b .s2 B3 ; just return if zero shift
mvk .s1 32,A0
sub .d1 A0,A1,A0
cmplt .l1 0,A0,A2
[A2] shru .s1 A4,A0,A0
[!A2] neg .l1 A0,A5
|| [A2] shl .s1 A5,A1,A5
[!A2] shl .s1 A4,A5,A5
|| [A2] or .d1 A5,A0,A5
|| [!A2] mvk .l1 0,A4
[A2] shl .s1 A4,A1,A4
bnop .s2 B3,5
ENDPROC(__c6xabi_llshl)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright (C) 2010 Texas Instruments Incorporated
;; Contributed by Mark Salter <msalter@redhat.com>.
;;
;; uint64_t __c6xabi_llshr(uint64_t val, uint shift)
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_llshr)
mv .l1x B4,A1
[!A1] b .s2 B3 ; return if zero shift count
mvk .s1 32,A0
sub .d1 A0,A1,A0
cmplt .l1 0,A0,A2
[A2] shl .s1 A5,A0,A0
nop
[!A2] neg .l1 A0,A4
|| [A2] shru .s1 A4,A1,A4
[!A2] shr .s1 A5,A4,A4
|| [A2] or .d1 A4,A0,A4
[!A2] shr .s1 A5,0x1f,A5
[A2] shr .s1 A5,A1,A5
bnop .s2 B3,5
ENDPROC(__c6xabi_llshr)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright (C) 2010 Texas Instruments Incorporated
;; Contributed by Mark Salter <msalter@redhat.com>.
;;
;; uint64_t __c6xabi_llshru(uint64_t val, uint shift)
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_llshru)
mv .l1x B4,A1
[!A1] b .s2 B3 ; return if zero shift count
mvk .s1 32,A0
sub .d1 A0,A1,A0
cmplt .l1 0,A0,A2
[A2] shl .s1 A5,A0,A0
nop
[!A2] neg .l1 A0,A4
|| [A2] shru .s1 A4,A1,A4
[!A2] shru .s1 A5,A4,A4
|| [A2] or .d1 A4,A0,A4
|| [!A2] mvk .l1 0,A5
[A2] shru .s1 A5,A1,A5
bnop .s2 B3,5
ENDPROC(__c6xabi_llshru)
; SPDX-License-Identifier: GPL-2.0-only
; Port on Texas Instruments TMS320C6x architecture
;
; Copyright (C) 2006, 2009, 2010 Texas Instruments Incorporated
; Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
;
#include <linux/linkage.h>
.text
ENTRY(memcpy)
AND .L1 0x1,A6,A0
|| AND .S1 0x2,A6,A1
|| AND .L2X 0x4,A6,B0
|| MV .D1 A4,A3
|| MVC .S2 ILC,B2
[A0] LDB .D2T1 *B4++,A5
[A1] LDB .D2T1 *B4++,A7
[A1] LDB .D2T1 *B4++,A8
[B0] LDNW .D2T1 *B4++,A9
|| SHRU .S2X A6,0x3,B1
[!B1] BNOP .S2 B3,1
[A0] STB .D1T1 A5,*A3++
||[B1] MVC .S2 B1,ILC
[A1] STB .D1T1 A7,*A3++
[A1] STB .D1T1 A8,*A3++
[B0] STNW .D1T1 A9,*A3++ ; return when len < 8
SPLOOP 2
LDNDW .D2T1 *B4++,A9:A8
NOP 3
NOP
SPKERNEL 0,0
|| STNDW .D1T1 A9:A8,*A3++
BNOP .S2 B3,4
MVC .S2 B2,ILC
ENDPROC(memcpy)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright (C) 2010 Texas Instruments Incorporated
;; Contributed by Mark Salter <msalter@redhat.com>.
;;
#include <linux/linkage.h>
;; uint64_t __c6xabi_mpyll(uint64_t x, uint64_t y)
;;
;; 64x64 multiply
;; First compute partial results using 32-bit parts of x and y:
;;
;; b63 b32 b31 b0
;; -----------------------------
;; | 1 | 0 |
;; -----------------------------
;;
;; P0 = X0*Y0
;; P1 = X0*Y1 + X1*Y0
;; P2 = X1*Y1
;;
;; result = (P2 << 64) + (P1 << 32) + P0
;;
;; Since the result is also 64-bit, we can skip the P2 term.
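;;
;; A plain C model of the same computation (a sketch; the code below
;; essentially schedules the three 32x32 multiplies in parallel):
;;
;;	uint64_t mpyll_model(uint64_t x, uint64_t y)
;;	{
;;		uint32_t x0 = x, x1 = x >> 32;
;;		uint32_t y0 = y, y1 = y >> 32;
;;		uint64_t p0 = (uint64_t)x0 * y0;
;;		uint32_t p1 = x0 * y1 + x1 * y0;	/* low 32 bits suffice */
;;
;;		return p0 + ((uint64_t)p1 << 32);
;;	}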
.text
ENTRY(__c6xabi_mpyll)
mpy32u .m1x A4,B4,A1:A0 ; X0*Y0
b .s2 B3
|| mpy32u .m2x B5,A4,B1:B0 ; X0*Y1 (don't need upper 32-bits)
|| mpy32u .m1x A5,B4,A3:A2 ; X1*Y0 (don't need upper 32-bits)
nop
nop
mv .s1 A0,A4
add .l1x A2,B0,A5
add .s1 A1,A5,A5
ENDPROC(__c6xabi_mpyll)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright (C) 2010 Texas Instruments Incorporated
;; Contributed by Mark Salter <msalter@redhat.com>.
;;
;; int64_t __c6xabi_negll(int64_t val)
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_negll)
b .s2 B3
mvk .l1 0,A0
subu .l1 A0,A4,A3:A2
sub .l1 A0,A5,A0
|| ext .s1 A3,24,24,A5
add .l1 A5,A0,A5
mv .s1 A2,A4
ENDPROC(__c6xabi_negll)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_pop_rts)
lddw .d2t2 *++B15, B3:B2
lddw .d2t1 *++B15, A11:A10
lddw .d2t2 *++B15, B11:B10
lddw .d2t1 *++B15, A13:A12
lddw .d2t2 *++B15, B13:B12
lddw .d2t1 *++B15, A15:A14
|| b .s2 B3
ldw .d2t2 *++B15[2], B14
nop 4
ENDPROC(__c6xabi_pop_rts)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_push_rts)
stw .d2t2 B14, *B15--[2]
stdw .d2t1 A15:A14, *B15--
|| b .s2x A3
stdw .d2t2 B13:B12, *B15--
stdw .d2t1 A13:A12, *B15--
stdw .d2t2 B11:B10, *B15--
stdw .d2t1 A11:A10, *B15--
stdw .d2t2 B3:B2, *B15--
ENDPROC(__c6xabi_push_rts)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
;; ABI considerations for the divide functions
;; The following registers are call-used:
;; __c6xabi_divi A0,A1,A2,A4,A6,B0,B1,B2,B4,B5
;; __c6xabi_divu A0,A1,A2,A4,A6,B0,B1,B2,B4
;; __c6xabi_remi A1,A2,A4,A5,A6,B0,B1,B2,B4
;; __c6xabi_remu A1,A4,A5,A7,B0,B1,B2,B4
;;
;; In our implementation, divu and remu are leaf functions,
;; while both divi and remi call into divu.
;; A0 is not clobbered by any of the functions.
;; divu does not clobber B2 either, which is taken advantage of
;; in remi.
;; divi uses B5 to hold the original return address during
;; the call to divu.
;; remi uses B2 and A5 to hold the input values during the
;; call to divu. It stores B3 on the stack.
.text
ENTRY(__c6xabi_remi)
stw .d2t2 B3, *B15--[2]
|| cmpgt .l1 0, A4, A1
|| cmpgt .l2 0, B4, B2
|| mv .s1 A4, A5
|| call .s2 __c6xabi_divu
[A1] neg .l1 A4, A4
|| [B2] neg .l2 B4, B4
|| xor .s2x B2, A1, B0
|| mv .d2 B4, B2
[B0] addkpc .s2 _divu_ret_1, B3, 1
[!B0] addkpc .s2 _divu_ret_2, B3, 1
nop 2
_divu_ret_1:
neg .l1 A4, A4
_divu_ret_2:
ldw .d2t2 *++B15[2], B3
mpy32 .m1x A4, B2, A6
nop 3
ret .s2 B3
sub .l1 A5, A6, A4
nop 4
ENDPROC(__c6xabi_remi)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
;; ABI considerations for the divide functions
;; The following registers are call-used:
;; __c6xabi_divi A0,A1,A2,A4,A6,B0,B1,B2,B4,B5
;; __c6xabi_divu A0,A1,A2,A4,A6,B0,B1,B2,B4
;; __c6xabi_remi A1,A2,A4,A5,A6,B0,B1,B2,B4
;; __c6xabi_remu A1,A4,A5,A7,B0,B1,B2,B4
;;
;; In our implementation, divu and remu are leaf functions,
;; while both divi and remi call into divu.
;; A0 is not clobbered by any of the functions.
;; divu does not clobber B2 either, which is taken advantage of
;; in remi.
;; divi uses B5 to hold the original return address during
;; the call to divu.
;; remi uses B2 and A5 to hold the input values during the
;; call to divu. It stores B3 on the stack.
.text
ENTRY(__c6xabi_remu)
;; The ABI seems designed to prevent these functions calling each other,
;; so we duplicate most of the divsi3 code here.
mv .s2x A4, B1
lmbd .l2 1, B4, B1
|| [!B1] b .s2 B3 ; RETURN A
|| [!B1] mvk .d2 1, B4
mv .l1x B1, A7
|| shl .s2 B4, B1, B4
cmpltu .l1x A4, B4, A1
[!A1] sub .l1x A4, B4, A4
shru .s2 B4, 1, B4
_remu_loop:
cmpgt .l2 B1, 7, B0
|| [B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
;; RETURN A may happen here (note: must happen before the next branch)
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
|| [B0] b .s1 _remu_loop
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
;; loop backwards branch happens here
ret .s2 B3
[B1] subc .l1x A4,B4,A4
|| [B1] add .s2 -1, B1, B1
[B1] subc .l1x A4,B4,A4
extu .s1 A4, A7, A4
nop 2
ENDPROC(__c6xabi_remu)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_strasgi)
;; This is essentially memcpy, with alignment known to be at least
;; 4, and the size a multiple of 4 greater than or equal to 28.
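;;
;; Functionally it is just the following (a rough C sketch, ignoring the
;; software pipelining and prefetching done below):
;;
;;	void strasgi_model(int *dst, const int *src, unsigned int bytes)
;;	{
;;		while (bytes) {
;;			*dst++ = *src++;
;;			bytes -= 4;
;;		}
;;	}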
ldw .d2t1 *B4++, A0
|| mvk .s2 16, B1
ldw .d2t1 *B4++, A1
|| mvk .s2 20, B2
|| sub .d1 A6, 24, A6
ldw .d2t1 *B4++, A5
ldw .d2t1 *B4++, A7
|| mv .l2x A6, B7
ldw .d2t1 *B4++, A8
ldw .d2t1 *B4++, A9
|| mv .s2x A0, B5
|| cmpltu .l2 B2, B7, B0
_strasgi_loop:
stw .d1t2 B5, *A4++
|| [B0] ldw .d2t1 *B4++, A0
|| mv .s2x A1, B5
|| mv .l2 B7, B6
[B0] sub .d2 B6, 24, B7
|| [B0] b .s2 _strasgi_loop
|| cmpltu .l2 B1, B6, B0
[B0] ldw .d2t1 *B4++, A1
|| stw .d1t2 B5, *A4++
|| mv .s2x A5, B5
|| cmpltu .l2 12, B6, B0
[B0] ldw .d2t1 *B4++, A5
|| stw .d1t2 B5, *A4++
|| mv .s2x A7, B5
|| cmpltu .l2 8, B6, B0
[B0] ldw .d2t1 *B4++, A7
|| stw .d1t2 B5, *A4++
|| mv .s2x A8, B5
|| cmpltu .l2 4, B6, B0
[B0] ldw .d2t1 *B4++, A8
|| stw .d1t2 B5, *A4++
|| mv .s2x A9, B5
|| cmpltu .l2 0, B6, B0
[B0] ldw .d2t1 *B4++, A9
|| stw .d1t2 B5, *A4++
|| mv .s2x A0, B5
|| cmpltu .l2 B2, B7, B0
;; loop back branch happens here
cmpltu .l2 B1, B6, B0
|| ret .s2 b3
[B0] stw .d1t1 A1, *A4++
|| cmpltu .l2 12, B6, B0
[B0] stw .d1t1 A5, *A4++
|| cmpltu .l2 8, B6, B0
[B0] stw .d1t1 A7, *A4++
|| cmpltu .l2 4, B6, B0
[B0] stw .d1t1 A8, *A4++
|| cmpltu .l2 0, B6, B0
[B0] stw .d1t1 A9, *A4++
;; return happens here
ENDPROC(__c6xabi_strasgi)
;; SPDX-License-Identifier: GPL-2.0-or-later
;; Copyright 2010 Free Software Foundation, Inc.
;; Contributed by Bernd Schmidt <bernds@codesourcery.com>.
;;
#include <linux/linkage.h>
.text
ENTRY(__c6xabi_strasgi_64plus)
shru .s2x a6, 2, b31
|| mv .s1 a4, a30
|| mv .d2 b4, b30
add .s2 -4, b31, b31
sploopd 1
|| mvc .s2 b31, ilc
ldw .d2t2 *b30++, b31
nop 4
mv .s1x b31,a31
spkernel 6, 0
|| stw .d1t1 a31, *a30++
ret .s2 b3
nop 5
ENDPROC(__c6xabi_strasgi_64plus)
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for the linux c6x-specific parts of the memory manager.
#
obj-y := init.o dma-coherent.o
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot <aurelien.jacquiot@ti.com>
*
* DMA uncached mapping support.
*
* Using code pulled from ARM
* Copyright (C) 2000-2004 Russell King
*/
#include <linux/slab.h>
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/dma-map-ops.h>
#include <linux/memblock.h>
#include <asm/cacheflush.h>
#include <asm/page.h>
#include <asm/setup.h>
/*
* DMA coherent memory management; the region can be configured with the
* memdma= kernel command line option
*/
/* none by default */
static phys_addr_t dma_base;
static u32 dma_size;
static u32 dma_pages;
static unsigned long *dma_bitmap;
/* bitmap lock */
static DEFINE_SPINLOCK(dma_lock);
/*
* Return a DMA coherent and contiguous memory chunk from the DMA memory
*/
static inline u32 __alloc_dma_pages(int order)
{
unsigned long flags;
u32 pos;
spin_lock_irqsave(&dma_lock, flags);
pos = bitmap_find_free_region(dma_bitmap, dma_pages, order);
spin_unlock_irqrestore(&dma_lock, flags);
return dma_base + (pos << PAGE_SHIFT);
}
static void __free_dma_pages(u32 addr, int order)
{
unsigned long flags;
u32 pos = (addr - dma_base) >> PAGE_SHIFT;
if (addr < dma_base || (pos + (1 << order)) >= dma_pages) {
printk(KERN_ERR "%s: freeing outside range.\n", __func__);
BUG();
}
spin_lock_irqsave(&dma_lock, flags);
bitmap_release_region(dma_bitmap, pos, order);
spin_unlock_irqrestore(&dma_lock, flags);
}
/*
* Allocate DMA coherent memory space and return both the kernel
* virtual and DMA address for that space.
*/
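/*
* Allocations are rounded up to a power-of-two number of pages; assuming
* 4 KiB pages, for example, a 20 KiB request spans 5 pages and is served
* from an order-3 (8 page, 32 KiB) region.
*/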
void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp_t gfp, unsigned long attrs)
{
void *ret;
u32 paddr;
int order;
if (!dma_size || !size)
return NULL;
order = get_count_order(((size - 1) >> PAGE_SHIFT) + 1);
paddr = __alloc_dma_pages(order);
if (handle)
*handle = paddr;
if (!paddr)
return NULL;
ret = phys_to_virt(paddr);
memset(ret, 0, PAGE_SIZE << order);
return ret;
}
/*
* Free DMA coherent memory as defined by the above mapping.
*/
void arch_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, unsigned long attrs)
{
int order;
if (!dma_size || !size)
return;
order = get_count_order(((size - 1) >> PAGE_SHIFT) + 1);
__free_dma_pages(virt_to_phys(vaddr), order);
}
/*
* Initialise the coherent DMA memory allocator using the given uncached region.
*/
void __init coherent_mem_init(phys_addr_t start, u32 size)
{
if (!size)
return;
printk(KERN_INFO
"Coherent memory (DMA) region start=0x%x size=0x%x\n",
start, size);
dma_base = start;
dma_size = size;
/* allocate bitmap */
dma_pages = dma_size >> PAGE_SHIFT;
if (dma_size & (PAGE_SIZE - 1))
++dma_pages;
dma_bitmap = memblock_alloc(BITS_TO_LONGS(dma_pages) * sizeof(long),
sizeof(long));
if (!dma_bitmap)
panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
__func__, BITS_TO_LONGS(dma_pages) * sizeof(long),
sizeof(long));
}
static void c6x_dma_sync(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
switch (dir) {
case DMA_FROM_DEVICE:
L2_cache_block_invalidate(paddr, paddr + size);
break;
case DMA_TO_DEVICE:
L2_cache_block_writeback(paddr, paddr + size);
break;
case DMA_BIDIRECTIONAL:
L2_cache_block_writeback_invalidate(paddr, paddr + size);
break;
default:
break;
}
}
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
return c6x_dma_sync(paddr, size, dir);
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
return c6x_dma_sync(paddr, size, dir);
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot (aurelien.jacquiot@jaluna.com)
*/
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/module.h>
#include <linux/memblock.h>
#ifdef CONFIG_BLK_DEV_RAM
#include <linux/blkdev.h>
#endif
#include <linux/initrd.h>
#include <asm/sections.h>
#include <linux/uaccess.h>
/*
* ZERO_PAGE is a special page that is used for zero-initialized
* data and COW.
*/
unsigned long empty_zero_page;
EXPORT_SYMBOL(empty_zero_page);
/*
* paging_init() continues the virtual memory environment setup which
* was begun by the code in arch/head.S.
* The parameters are pointers to where to stick the starting and ending
* addresses of available kernel virtual memory.
*/
void __init paging_init(void)
{
struct pglist_data *pgdat = NODE_DATA(0);
unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
empty_zero_page = (unsigned long) memblock_alloc(PAGE_SIZE,
PAGE_SIZE);
if (!empty_zero_page)
panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);
/*
* Set up user data space
*/
set_fs(KERNEL_DS);
/*
* Define zones
*/
max_zone_pfn[ZONE_NORMAL] = memory_end >> PAGE_SHIFT;
free_area_init(max_zone_pfn);
}
void __init mem_init(void)
{
high_memory = (void *)(memory_end & PAGE_MASK);
/* this will put all memory onto the freelists */
memblock_free_all();
mem_init_print_info(NULL);
}
# SPDX-License-Identifier: GPL-2.0
config SOC_TMS320C6455
bool "TMS320C6455"
default n
config SOC_TMS320C6457
bool "TMS320C6457"
default n
config SOC_TMS320C6472
bool "TMS320C6472"
default n
config SOC_TMS320C6474
bool "TMS320C6474"
default n
config SOC_TMS320C6678
bool "TMS320C6678"
default n
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for arch/c6x/platforms
#
# Copyright 2010, 2011 Texas Instruments Incorporated
#
obj-y = cache.o megamod-pic.o pll.o plldata.o timer64.o
obj-y += dscr.o
# SoC objects
obj-$(CONFIG_SOC_TMS320C6455) += emif.o
obj-$(CONFIG_SOC_TMS320C6457) += emif.o
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/io.h>
#include <asm/cache.h>
#include <asm/soc.h>
/*
* Internal Memory Control Registers for caches
*/
#define IMCR_CCFG 0x0000
#define IMCR_L1PCFG 0x0020
#define IMCR_L1PCC 0x0024
#define IMCR_L1DCFG 0x0040
#define IMCR_L1DCC 0x0044
#define IMCR_L2ALLOC0 0x2000
#define IMCR_L2ALLOC1 0x2004
#define IMCR_L2ALLOC2 0x2008
#define IMCR_L2ALLOC3 0x200c
#define IMCR_L2WBAR 0x4000
#define IMCR_L2WWC 0x4004
#define IMCR_L2WIBAR 0x4010
#define IMCR_L2WIWC 0x4014
#define IMCR_L2IBAR 0x4018
#define IMCR_L2IWC 0x401c
#define IMCR_L1PIBAR 0x4020
#define IMCR_L1PIWC 0x4024
#define IMCR_L1DWIBAR 0x4030
#define IMCR_L1DWIWC 0x4034
#define IMCR_L1DWBAR 0x4040
#define IMCR_L1DWWC 0x4044
#define IMCR_L1DIBAR 0x4048
#define IMCR_L1DIWC 0x404c
#define IMCR_L2WB 0x5000
#define IMCR_L2WBINV 0x5004
#define IMCR_L2INV 0x5008
#define IMCR_L1PINV 0x5028
#define IMCR_L1DWB 0x5040
#define IMCR_L1DWBINV 0x5044
#define IMCR_L1DINV 0x5048
#define IMCR_MAR_BASE 0x8000
#define IMCR_MAR96_111 0x8180
#define IMCR_MAR128_191 0x8200
#define IMCR_MAR224_239 0x8380
#define IMCR_L2MPFAR 0xa000
#define IMCR_L2MPFSR 0xa004
#define IMCR_L2MPFCR 0xa008
#define IMCR_L2MPLK0 0xa100
#define IMCR_L2MPLK1 0xa104
#define IMCR_L2MPLK2 0xa108
#define IMCR_L2MPLK3 0xa10c
#define IMCR_L2MPLKCMD 0xa110
#define IMCR_L2MPLKSTAT 0xa114
#define IMCR_L2MPPA_BASE 0xa200
#define IMCR_L1PMPFAR 0xa400
#define IMCR_L1PMPFSR 0xa404
#define IMCR_L1PMPFCR 0xa408
#define IMCR_L1PMPLK0 0xa500
#define IMCR_L1PMPLK1 0xa504
#define IMCR_L1PMPLK2 0xa508
#define IMCR_L1PMPLK3 0xa50c
#define IMCR_L1PMPLKCMD 0xa510
#define IMCR_L1PMPLKSTAT 0xa514
#define IMCR_L1PMPPA_BASE 0xa600
#define IMCR_L1DMPFAR 0xac00
#define IMCR_L1DMPFSR 0xac04
#define IMCR_L1DMPFCR 0xac08
#define IMCR_L1DMPLK0 0xad00
#define IMCR_L1DMPLK1 0xad04
#define IMCR_L1DMPLK2 0xad08
#define IMCR_L1DMPLK3 0xad0c
#define IMCR_L1DMPLKCMD 0xad10
#define IMCR_L1DMPLKSTAT 0xad14
#define IMCR_L1DMPPA_BASE 0xae00
#define IMCR_L2PDWAKE0 0xc040
#define IMCR_L2PDWAKE1 0xc044
#define IMCR_L2PDSLEEP0 0xc050
#define IMCR_L2PDSLEEP1 0xc054
#define IMCR_L2PDSTAT0 0xc060
#define IMCR_L2PDSTAT1 0xc064
/*
* CCFG register values and bits
*/
#define L2MODE_0K_CACHE 0x0
#define L2MODE_32K_CACHE 0x1
#define L2MODE_64K_CACHE 0x2
#define L2MODE_128K_CACHE 0x3
#define L2MODE_256K_CACHE 0x7
#define L2PRIO_URGENT 0x0
#define L2PRIO_HIGH 0x1
#define L2PRIO_MEDIUM 0x2
#define L2PRIO_LOW 0x3
#define CCFG_ID 0x100 /* Invalidate L1P bit */
#define CCFG_IP 0x200 /* Invalidate L1D bit */
static void __iomem *cache_base;
/*
* L1 & L2 caches generic functions
*/
#define imcr_get(reg) soc_readl(cache_base + (reg))
#define imcr_set(reg, value) \
do { \
soc_writel((value), cache_base + (reg)); \
soc_readl(cache_base + (reg)); \
} while (0)
static void cache_block_operation_wait(unsigned int wc_reg)
{
/* Wait for completion */
while (imcr_get(wc_reg))
cpu_relax();
}
static DEFINE_SPINLOCK(cache_lock);
/*
* Generic function to perform a block cache operation as
* invalidate or writeback/invalidate
*/
static void cache_block_operation(unsigned int *start,
unsigned int *end,
unsigned int bar_reg,
unsigned int wc_reg)
{
unsigned long flags;
unsigned int wcnt =
(L2_CACHE_ALIGN_CNT((unsigned int) end)
- L2_CACHE_ALIGN_LOW((unsigned int) start)) >> 2;
unsigned int wc = 0;
for (; wcnt; wcnt -= wc, start += wc) {
loop:
spin_lock_irqsave(&cache_lock, flags);
/*
* If another cache operation is occurring
*/
if (unlikely(imcr_get(wc_reg))) {
spin_unlock_irqrestore(&cache_lock, flags);
/* Wait for previous operation completion */
cache_block_operation_wait(wc_reg);
/* Try again */
goto loop;
}
imcr_set(bar_reg, L2_CACHE_ALIGN_LOW((unsigned int) start));
if (wcnt > 0xffff)
wc = 0xffff;
else
wc = wcnt;
/* Set word count value in the WC register */
imcr_set(wc_reg, wc & 0xffff);
spin_unlock_irqrestore(&cache_lock, flags);
/* Wait for completion */
cache_block_operation_wait(wc_reg);
}
}
static void cache_block_operation_nowait(unsigned int *start,
unsigned int *end,
unsigned int bar_reg,
unsigned int wc_reg)
{
unsigned long flags;
unsigned int wcnt =
(L2_CACHE_ALIGN_CNT((unsigned int) end)
- L2_CACHE_ALIGN_LOW((unsigned int) start)) >> 2;
unsigned int wc = 0;
for (; wcnt; wcnt -= wc, start += wc) {
spin_lock_irqsave(&cache_lock, flags);
imcr_set(bar_reg, L2_CACHE_ALIGN_LOW((unsigned int) start));
if (wcnt > 0xffff)
wc = 0xffff;
else
wc = wcnt;
/* Set word count value in the WC register */
imcr_set(wc_reg, wc & 0xffff);
spin_unlock_irqrestore(&cache_lock, flags);
/* Don't wait for completion on last cache operation */
if (wcnt > 0xffff)
cache_block_operation_wait(wc_reg);
}
}
/*
* L1 caches management
*/
/*
* Disable L1 caches
*/
void L1_cache_off(void)
{
unsigned int dummy;
imcr_set(IMCR_L1PCFG, 0);
dummy = imcr_get(IMCR_L1PCFG);
imcr_set(IMCR_L1DCFG, 0);
dummy = imcr_get(IMCR_L1DCFG);
}
/*
* Enable L1 caches
*/
void L1_cache_on(void)
{
unsigned int dummy;
imcr_set(IMCR_L1PCFG, 7);
dummy = imcr_get(IMCR_L1PCFG);
imcr_set(IMCR_L1DCFG, 7);
dummy = imcr_get(IMCR_L1DCFG);
}
/*
* L1P global-invalidate all
*/
void L1P_cache_global_invalidate(void)
{
unsigned int set = 1;
imcr_set(IMCR_L1PINV, set);
while (imcr_get(IMCR_L1PINV) & 1)
cpu_relax();
}
/*
* L1D global-invalidate all
*
* Warning: this operation causes all updated data in L1D to
* be discarded rather than written back to the lower levels of
* memory
*/
void L1D_cache_global_invalidate(void)
{
unsigned int set = 1;
imcr_set(IMCR_L1DINV, set);
while (imcr_get(IMCR_L1DINV) & 1)
cpu_relax();
}
void L1D_cache_global_writeback(void)
{
unsigned int set = 1;
imcr_set(IMCR_L1DWB, set);
while (imcr_get(IMCR_L1DWB) & 1)
cpu_relax();
}
void L1D_cache_global_writeback_invalidate(void)
{
unsigned int set = 1;
imcr_set(IMCR_L1DWBINV, set);
while (imcr_get(IMCR_L1DWBINV) & 1)
cpu_relax();
}
/*
* L2 caches management
*/
/*
* Set L2 operation mode
*/
void L2_cache_set_mode(unsigned int mode)
{
unsigned int ccfg = imcr_get(IMCR_CCFG);
/* Clear and set the L2MODE bits in CCFG */
ccfg &= ~7;
ccfg |= (mode & 7);
imcr_set(IMCR_CCFG, ccfg);
ccfg = imcr_get(IMCR_CCFG);
}
/*
* L2 global-writeback and global-invalidate all
*/
void L2_cache_global_writeback_invalidate(void)
{
imcr_set(IMCR_L2WBINV, 1);
while (imcr_get(IMCR_L2WBINV))
cpu_relax();
}
/*
* L2 global-writeback all
*/
void L2_cache_global_writeback(void)
{
imcr_set(IMCR_L2WB, 1);
while (imcr_get(IMCR_L2WB))
cpu_relax();
}
/*
* Cacheability controls
*/
void enable_caching(unsigned long start, unsigned long end)
{
unsigned int mar = IMCR_MAR_BASE + ((start >> 24) << 2);
unsigned int mar_e = IMCR_MAR_BASE + ((end >> 24) << 2);
for (; mar <= mar_e; mar += 4)
imcr_set(mar, imcr_get(mar) | 1);
}
void disable_caching(unsigned long start, unsigned long end)
{
unsigned int mar = IMCR_MAR_BASE + ((start >> 24) << 2);
unsigned int mar_e = IMCR_MAR_BASE + ((end >> 24) << 2);
for (; mar <= mar_e; mar += 4)
imcr_set(mar, imcr_get(mar) & ~1);
}
/*
* L1 block operations
*/
void L1P_cache_block_invalidate(unsigned int start, unsigned int end)
{
cache_block_operation((unsigned int *) start,
(unsigned int *) end,
IMCR_L1PIBAR, IMCR_L1PIWC);
}
EXPORT_SYMBOL(L1P_cache_block_invalidate);
void L1D_cache_block_invalidate(unsigned int start, unsigned int end)
{
cache_block_operation((unsigned int *) start,
(unsigned int *) end,
IMCR_L1DIBAR, IMCR_L1DIWC);
}
void L1D_cache_block_writeback_invalidate(unsigned int start, unsigned int end)
{
cache_block_operation((unsigned int *) start,
(unsigned int *) end,
IMCR_L1DWIBAR, IMCR_L1DWIWC);
}
void L1D_cache_block_writeback(unsigned int start, unsigned int end)
{
cache_block_operation((unsigned int *) start,
(unsigned int *) end,
IMCR_L1DWBAR, IMCR_L1DWWC);
}
EXPORT_SYMBOL(L1D_cache_block_writeback);
/*
* L2 block operations
*/
void L2_cache_block_invalidate(unsigned int start, unsigned int end)
{
cache_block_operation((unsigned int *) start,
(unsigned int *) end,
IMCR_L2IBAR, IMCR_L2IWC);
}
void L2_cache_block_writeback(unsigned int start, unsigned int end)
{
cache_block_operation((unsigned int *) start,
(unsigned int *) end,
IMCR_L2WBAR, IMCR_L2WWC);
}
void L2_cache_block_writeback_invalidate(unsigned int start, unsigned int end)
{
cache_block_operation((unsigned int *) start,
(unsigned int *) end,
IMCR_L2WIBAR, IMCR_L2WIWC);
}
void L2_cache_block_invalidate_nowait(unsigned int start, unsigned int end)
{
cache_block_operation_nowait((unsigned int *) start,
(unsigned int *) end,
IMCR_L2IBAR, IMCR_L2IWC);
}
void L2_cache_block_writeback_nowait(unsigned int start, unsigned int end)
{
cache_block_operation_nowait((unsigned int *) start,
(unsigned int *) end,
IMCR_L2WBAR, IMCR_L2WWC);
}
void L2_cache_block_writeback_invalidate_nowait(unsigned int start,
unsigned int end)
{
cache_block_operation_nowait((unsigned int *) start,
(unsigned int *) end,
IMCR_L2WIBAR, IMCR_L2WIWC);
}
/*
* L1 and L2 caches configuration
*/
void __init c6x_cache_init(void)
{
struct device_node *node;
node = of_find_compatible_node(NULL, NULL, "ti,c64x+cache");
if (!node)
return;
cache_base = of_iomap(node, 0);
of_node_put(node);
if (!cache_base)
return;
/* Set L2 caches on the whole L2 SRAM memory */
L2_cache_set_mode(L2MODE_SIZE);
/* Enable L1 */
L1_cache_on();
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Device State Control Registers driver
*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
/*
* The Device State Control Registers (DSCR) provide SoC level control over
* a number of peripherals. Details vary considerably among the various SoC
* parts. In general, the DSCR block will provide one or more configuration
* registers often protected by a lock register. One or more key values must
* be written to a lock register in order to unlock the configuration register.
* The configuration register may be used to enable (and disable in some
* cases) SoC pin drivers, peripheral clock sources (internal or pin), etc.
* In some cases, a configuration register is write once or the individual
* bits are write once. That is, you may be able to enable a device, but
* will not be able to disable it.
*
* In addition to device configuration, the DSCR block may provide registers
* which are used to reset SoC peripherals, provide device ID information,
* provide MAC addresses, and other miscellaneous functions.
*/
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/module.h>
#include <linux/io.h>
#include <linux/delay.h>
#include <asm/soc.h>
#include <asm/dscr.h>
#define MAX_DEVSTATE_IDS 32
#define MAX_DEVCTL_REGS 8
#define MAX_DEVSTAT_REGS 8
#define MAX_LOCKED_REGS 4
#define MAX_SOC_EMACS 2
struct rmii_reset_reg {
u32 reg;
u32 mask;
};
/*
* Some registers may be locked. In order to write to these
* registers, the key value must first be written to the lockreg.
*/
struct locked_reg {
u32 reg; /* offset from base */
u32 lockreg; /* offset from base */
u32 key; /* unlock key */
};
/*
* This describes a contiguous area of like control bits used to enable/disable
* SoC devices. Each controllable device is given an ID which is used by the
* individual device drivers to control the device state. These IDs start at
* zero and are assigned sequentially to the control bitfield ranges described
* by this structure.
*/
struct devstate_ctl_reg {
u32 reg; /* register holding the control bits */
u8 start_id; /* start id of this range */
u8 num_ids; /* number of devices in this range */
u8 enable_only; /* bits are write-once to enable only */
u8 enable; /* value used to enable device */
u8 disable; /* value used to disable device */
u8 shift; /* starting (rightmost) bit in range */
u8 nbits; /* number of bits per device */
};
/*
* This describes a region of status bits indicating the state of
* various devices. This is used internally to wait for status
* change completion when enabling/disabling a device. Status is
* optional and not all device controls will have a corresponding
* status.
*/
struct devstate_stat_reg {
u32 reg; /* register holding the status bits */
u8 start_id; /* start id of this range */
u8 num_ids; /* number of devices in this range */
u8 enable; /* value indicating enabled state */
u8 disable; /* value indicating disabled state */
u8 shift; /* starting (rightmost) bit in range */
u8 nbits; /* number of bits per device */
};
struct devstate_info {
struct devstate_ctl_reg *ctl;
struct devstate_stat_reg *stat;
};
/* These are callbacks to SOC-specific code. */
struct dscr_ops {
void (*init)(struct device_node *node);
};
struct dscr_regs {
spinlock_t lock;
void __iomem *base;
u32 kick_reg[2];
u32 kick_key[2];
struct locked_reg locked[MAX_LOCKED_REGS];
struct devstate_info devstate_info[MAX_DEVSTATE_IDS];
struct rmii_reset_reg rmii_resets[MAX_SOC_EMACS];
struct devstate_ctl_reg devctl[MAX_DEVCTL_REGS];
struct devstate_stat_reg devstat[MAX_DEVSTAT_REGS];
};
static struct dscr_regs dscr;
static struct locked_reg *find_locked_reg(u32 reg)
{
int i;
for (i = 0; i < MAX_LOCKED_REGS; i++)
if (dscr.locked[i].key && reg == dscr.locked[i].reg)
return &dscr.locked[i];
return NULL;
}
/*
* Write to a register with one lock
*/
static void dscr_write_locked1(u32 reg, u32 val,
u32 lock, u32 key)
{
void __iomem *reg_addr = dscr.base + reg;
void __iomem *lock_addr = dscr.base + lock;
/*
* For some registers, the lock is relocked after a short number
* of cycles. We have to put the lock write and register write in
* the same fetch packet to meet this timing. The .align ensures
* the two stw instructions are in the same fetch packet.
*/
asm volatile ("b .s2 0f\n"
"nop 5\n"
" .align 5\n"
"0:\n"
"stw .D1T2 %3,*%2\n"
"stw .D1T2 %1,*%0\n"
:
: "a"(reg_addr), "b"(val), "a"(lock_addr), "b"(key)
);
/* in case the hw doesn't reset the lock */
soc_writel(0, lock_addr);
}
/*
* Write to a register protected by two lock registers
*/
static void dscr_write_locked2(u32 reg, u32 val,
u32 lock0, u32 key0,
u32 lock1, u32 key1)
{
soc_writel(key0, dscr.base + lock0);
soc_writel(key1, dscr.base + lock1);
soc_writel(val, dscr.base + reg);
soc_writel(0, dscr.base + lock0);
soc_writel(0, dscr.base + lock1);
}
static void dscr_write(u32 reg, u32 val)
{
struct locked_reg *lock;
lock = find_locked_reg(reg);
if (lock)
dscr_write_locked1(reg, val, lock->lockreg, lock->key);
else if (dscr.kick_key[0])
dscr_write_locked2(reg, val, dscr.kick_reg[0], dscr.kick_key[0],
dscr.kick_reg[1], dscr.kick_key[1]);
else
soc_writel(val, dscr.base + reg);
}
/*
* Drivers can use this interface to enable/disable SoC IP blocks.
*/
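/*
* A minimal usage sketch (hypothetical variable names; the EMIFA init
* code below does exactly this with its "ti,dscr-dev-enable" property):
*
*	err = of_property_read_u32_array(np, "ti,dscr-dev-enable", &id, 1);
*	if (!err)
*		dscr_set_devstate(id, DSCR_DEVSTATE_ENABLED);
*/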
void dscr_set_devstate(int id, enum dscr_devstate_t state)
{
struct devstate_ctl_reg *ctl;
struct devstate_stat_reg *stat;
struct devstate_info *info;
u32 ctl_val, val;
int ctl_shift, ctl_mask;
unsigned long flags;
if (!dscr.base)
return;
if (id < 0 || id >= MAX_DEVSTATE_IDS)
return;
info = &dscr.devstate_info[id];
ctl = info->ctl;
stat = info->stat;
if (ctl == NULL)
return;
ctl_shift = ctl->shift + ctl->nbits * (id - ctl->start_id);
ctl_mask = ((1 << ctl->nbits) - 1) << ctl_shift;
switch (state) {
case DSCR_DEVSTATE_ENABLED:
ctl_val = ctl->enable << ctl_shift;
break;
case DSCR_DEVSTATE_DISABLED:
if (ctl->enable_only)
return;
ctl_val = ctl->disable << ctl_shift;
break;
default:
return;
}
spin_lock_irqsave(&dscr.lock, flags);
val = soc_readl(dscr.base + ctl->reg);
val &= ~ctl_mask;
val |= ctl_val;
dscr_write(ctl->reg, val);
spin_unlock_irqrestore(&dscr.lock, flags);
if (!stat)
return;
ctl_shift = stat->shift + stat->nbits * (id - stat->start_id);
if (state == DSCR_DEVSTATE_ENABLED)
ctl_val = stat->enable;
else
ctl_val = stat->disable;
do {
val = soc_readl(dscr.base + stat->reg);
val >>= ctl_shift;
val &= ((1 << stat->nbits) - 1);
} while (val != ctl_val);
}
EXPORT_SYMBOL(dscr_set_devstate);
/*
* Drivers can use this to reset RMII module.
*/
void dscr_rmii_reset(int id, int assert)
{
struct rmii_reset_reg *r;
unsigned long flags;
u32 val;
if (id < 0 || id >= MAX_SOC_EMACS)
return;
r = &dscr.rmii_resets[id];
if (r->mask == 0)
return;
spin_lock_irqsave(&dscr.lock, flags);
val = soc_readl(dscr.base + r->reg);
if (assert)
dscr_write(r->reg, val | r->mask);
else
dscr_write(r->reg, val & ~(r->mask));
spin_unlock_irqrestore(&dscr.lock, flags);
}
EXPORT_SYMBOL(dscr_rmii_reset);
static void __init dscr_parse_devstat(struct device_node *node,
void __iomem *base)
{
u32 val;
int err;
err = of_property_read_u32_array(node, "ti,dscr-devstat", &val, 1);
if (!err)
c6x_devstat = soc_readl(base + val);
printk(KERN_INFO "DEVSTAT: %08x\n", c6x_devstat);
}
static void __init dscr_parse_silicon_rev(struct device_node *node,
void __iomem *base)
{
u32 vals[3];
int err;
err = of_property_read_u32_array(node, "ti,dscr-silicon-rev", vals, 3);
if (!err) {
c6x_silicon_rev = soc_readl(base + vals[0]);
c6x_silicon_rev >>= vals[1];
c6x_silicon_rev &= vals[2];
}
}
/*
* Some SoCs will have a pair of fuse registers which hold
* an ethernet MAC address. The "ti,dscr-mac-fuse-regs"
* property is a mapping from fuse register bytes to MAC
* address bytes. The expected format is:
*
* ti,dscr-mac-fuse-regs = <reg0 b3 b2 b1 b0
* reg1 b3 b2 b1 b0>
*
* reg0 and reg1 are the offsets of the two fuse registers.
* b3-b0 positionally represent bytes within the fuse register.
* b3 is the most significant byte and b0 is the least.
* Allowable values for b3-b0 are:
*
* 0 = fuse register byte not used in MAC address
* 1-6 = index+1 into c6x_fuse_mac[]
*/
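/*
* For example, a hypothetical description (register offsets made up for
* illustration)
*
*	ti,dscr-mac-fuse-regs = <0x700 1 2 3 4
*				 0x704 5 6 0 0>;
*
* would fill MAC bytes 1-4 from the register at offset 0x700, most
* significant byte first, and MAC bytes 5-6 from the two upper bytes of
* the register at offset 0x704.
*/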
static void __init dscr_parse_mac_fuse(struct device_node *node,
void __iomem *base)
{
u32 vals[10], fuse;
int f, i, j, err;
err = of_property_read_u32_array(node, "ti,dscr-mac-fuse-regs",
vals, 10);
if (err)
return;
for (f = 0; f < 2; f++) {
fuse = soc_readl(base + vals[f * 5]);
for (j = (f * 5) + 1, i = 24; i >= 0; i -= 8, j++)
if (vals[j] && vals[j] <= 6)
c6x_fuse_mac[vals[j] - 1] = fuse >> i;
}
}
static void __init dscr_parse_rmii_resets(struct device_node *node,
void __iomem *base)
{
const __be32 *p;
int i, size;
/* look for RMII reset registers */
p = of_get_property(node, "ti,dscr-rmii-resets", &size);
if (p) {
/* parse all the reg/mask pairs we can handle */
size /= (sizeof(*p) * 2);
if (size > MAX_SOC_EMACS)
size = MAX_SOC_EMACS;
for (i = 0; i < size; i++) {
dscr.rmii_resets[i].reg = be32_to_cpup(p++);
dscr.rmii_resets[i].mask = be32_to_cpup(p++);
}
}
}
static void __init dscr_parse_privperm(struct device_node *node,
void __iomem *base)
{
u32 vals[2];
int err;
err = of_property_read_u32_array(node, "ti,dscr-privperm", vals, 2);
if (err)
return;
dscr_write(vals[0], vals[1]);
}
/*
* SoCs may have "locked" DSCR registers which can only be written
* after writing a key value to a lock register. These
* registers can be described with the "ti,dscr-locked-regs" property.
* This property provides a list of register descriptions with each
* description consisting of three values.
*
* ti,dscr-locked-regs = <reg0 lockreg0 key0
* ...
* regN lockregN keyN>;
*
* reg is the offset of the locked register
* lockreg is the offset of the lock register
* key is the unlock key written to lockreg
*
*/
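/*
* For example, the hypothetical entry (all values made up for
* illustration)
*
*	ti,dscr-locked-regs = <0x40 0x38 0x4e657700>;
*
* would mean that a write to the register at offset 0x40 must be preceded
* by writing the key 0x4e657700 to the lock register at offset 0x38.
*/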
static void __init dscr_parse_locked_regs(struct device_node *node,
void __iomem *base)
{
struct locked_reg *r;
const __be32 *p;
int i, size;
p = of_get_property(node, "ti,dscr-locked-regs", &size);
if (p) {
/* parse all the register descriptions we can handle */
size /= (sizeof(*p) * 3);
if (size > MAX_LOCKED_REGS)
size = MAX_LOCKED_REGS;
for (i = 0; i < size; i++) {
r = &dscr.locked[i];
r->reg = be32_to_cpup(p++);
r->lockreg = be32_to_cpup(p++);
r->key = be32_to_cpup(p++);
}
}
}
/*
* SoCs may have DSCR registers which are only write enabled after
* writing specific key values to two registers. The two key registers
* and the key values can be parsed from a "ti,dscr-kick-regs"
* property with the following layout:
*
* ti,dscr-kick-regs = <kickreg0 key0 kickreg1 key1>
*
* kickreg is the offset of the "kick" register
* key is the value which unlocks writing for protected regs
*/
static void __init dscr_parse_kick_regs(struct device_node *node,
void __iomem *base)
{
u32 vals[4];
int err;
err = of_property_read_u32_array(node, "ti,dscr-kick-regs", vals, 4);
if (!err) {
dscr.kick_reg[0] = vals[0];
dscr.kick_key[0] = vals[1];
dscr.kick_reg[1] = vals[2];
dscr.kick_key[1] = vals[3];
}
}
/*
* SoCs may provide controls to enable/disable individual IP blocks. These
* controls in the DSCR usually control pin drivers but also may control
* clocking and/or resets. The device tree is used to describe the bitfields
* in registers used to control device state. The number of bits and their
* values may vary even within the same register.
*
* The layout of these bitfields is described by the ti,dscr-devstate-ctl-regs
* property. This property is a list where each element describes a contiguous
* range of control fields with like properties. Each element of the list
* consists of 7 cells with the following values:
*
* start_id num_ids reg enable disable start_bit nbits
*
* start_id is device id for the first device control in the range
* num_ids is the number of device controls in the range
* reg is the offset of the register holding the control bits
* enable is the value to enable a device
* disable is the value to disable a device (0xffffffff if cannot disable)
* start_bit is the bit number of the first bit in the range
* nbits is the number of bits per device control
*/
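/*
* For example, the hypothetical entry (values made up for illustration)
*
*	ti,dscr-devstate-ctl-regs = <0 4 0x40 1 0 0 1>;
*
* would describe device ids 0-3 as four single-bit fields in the register
* at offset 0x40, starting at bit 0, written with 1 to enable and 0 to
* disable a device.
*/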
static void __init dscr_parse_devstate_ctl_regs(struct device_node *node,
void __iomem *base)
{
struct devstate_ctl_reg *r;
const __be32 *p;
int i, j, size;
p = of_get_property(node, "ti,dscr-devstate-ctl-regs", &size);
if (p) {
/* parse all the ranges we can handle */
size /= (sizeof(*p) * 7);
if (size > MAX_DEVCTL_REGS)
size = MAX_DEVCTL_REGS;
for (i = 0; i < size; i++) {
r = &dscr.devctl[i];
r->start_id = be32_to_cpup(p++);
r->num_ids = be32_to_cpup(p++);
r->reg = be32_to_cpup(p++);
r->enable = be32_to_cpup(p++);
r->disable = be32_to_cpup(p++);
if (r->disable == 0xffffffff)
r->enable_only = 1;
r->shift = be32_to_cpup(p++);
r->nbits = be32_to_cpup(p++);
for (j = r->start_id;
j < (r->start_id + r->num_ids);
j++)
dscr.devstate_info[j].ctl = r;
}
}
}
/*
* SoCs may provide status registers indicating the state (enabled/disabled) of
* devices on the SoC. The device tree is used to describe the bitfields in
* registers used to provide device status. The number of bits and their
* values used to provide status may vary even within the same register.
*
* The layout of these bitfields is described by the ti,dscr-devstate-stat-regs
* property. This property is a list where each element describes a contiguous
* range of status fields with like properties. Each element of the list
* consists of 7 cells with the following values:
*
* start_id num_ids reg enable disable start_bit nbits
*
* start_id is device id for the first device status in the range
* num_ids is the number of devices covered by the range
* reg is the offset of the register holding the status bits
* enable is the value indicating device is enabled
* disable is the value indicating device is disabled
* start_bit is the bit number of the first bit in the range
* nbits is the number of bits per device status
*/
static void __init dscr_parse_devstate_stat_regs(struct device_node *node,
void __iomem *base)
{
struct devstate_stat_reg *r;
const __be32 *p;
int i, j, size;
p = of_get_property(node, "ti,dscr-devstate-stat-regs", &size);
if (p) {
/* parse all the ranges we can handle */
size /= (sizeof(*p) * 7);
if (size > MAX_DEVSTAT_REGS)
size = MAX_DEVSTAT_REGS;
for (i = 0; i < size; i++) {
r = &dscr.devstat[i];
r->start_id = be32_to_cpup(p++);
r->num_ids = be32_to_cpup(p++);
r->reg = be32_to_cpup(p++);
r->enable = be32_to_cpup(p++);
r->disable = be32_to_cpup(p++);
r->shift = be32_to_cpup(p++);
r->nbits = be32_to_cpup(p++);
for (j = r->start_id;
j < (r->start_id + r->num_ids);
j++)
dscr.devstate_info[j].stat = r;
}
}
}
static struct of_device_id dscr_ids[] __initdata = {
{ .compatible = "ti,c64x+dscr" },
{}
};
/*
* Probe for DSCR area.
*
* This has to be done early on in case the timer or interrupt controller
* needs something, e.g. on the C6455 SoC the timer must be enabled through
* the DSCR before it is functional.
*/
void __init dscr_probe(void)
{
struct device_node *node;
void __iomem *base;
spin_lock_init(&dscr.lock);
node = of_find_matching_node(NULL, dscr_ids);
if (!node)
return;
base = of_iomap(node, 0);
if (!base) {
of_node_put(node);
return;
}
dscr.base = base;
dscr_parse_devstat(node, base);
dscr_parse_silicon_rev(node, base);
dscr_parse_mac_fuse(node, base);
dscr_parse_rmii_resets(node, base);
dscr_parse_locked_regs(node, base);
dscr_parse_kick_regs(node, base);
dscr_parse_devstate_ctl_regs(node, base);
dscr_parse_devstate_stat_regs(node, base);
dscr_parse_privperm(node, base);
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* External Memory Interface
*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/io.h>
#include <asm/soc.h>
#include <asm/dscr.h>
#define NUM_EMIFA_CHIP_ENABLES 4
struct emifa_regs {
u32 midr;
u32 stat;
u32 reserved1[6];
u32 bprio;
u32 reserved2[23];
u32 cecfg[NUM_EMIFA_CHIP_ENABLES];
u32 reserved3[4];
u32 awcc;
u32 reserved4[7];
u32 intraw;
u32 intmsk;
u32 intmskset;
u32 intmskclr;
};
static struct of_device_id emifa_match[] __initdata = {
{ .compatible = "ti,c64x+emifa" },
{}
};
/*
* Parse device tree for existence of an EMIF (External Memory Interface)
* and initialize it if found.
*/
static int __init c6x_emifa_init(void)
{
struct emifa_regs __iomem *regs;
struct device_node *node;
const __be32 *p;
u32 val;
int i, len, err;
node = of_find_matching_node(NULL, emifa_match);
if (!node)
return 0;
regs = of_iomap(node, 0);
if (!regs)
return 0;
/* look for a dscr-based enable for emifa pin buffers */
err = of_property_read_u32_array(node, "ti,dscr-dev-enable", &val, 1);
if (!err)
dscr_set_devstate(val, DSCR_DEVSTATE_ENABLED);
/* set up the chip enables */
p = of_get_property(node, "ti,emifa-ce-config", &len);
if (p) {
len /= sizeof(u32);
if (len > NUM_EMIFA_CHIP_ENABLES)
len = NUM_EMIFA_CHIP_ENABLES;
for (i = 0; i < len; i++)
soc_writel(be32_to_cpup(&p[i]), &regs->cecfg[i]);
}
err = of_property_read_u32_array(node, "ti,emifa-burst-priority", &val, 1);
if (!err)
soc_writel(val, &regs->bprio);
err = of_property_read_u32_array(node, "ti,emifa-async-wait-control", &val, 1);
if (!err)
soc_writel(val, &regs->awcc);
iounmap(regs);
of_node_put(node);
return 0;
}
pure_initcall(c6x_emifa_init);
// SPDX-License-Identifier: GPL-2.0-only
/*
* Support for C64x+ Megamodule Interrupt Controller
*
* Copyright (C) 2010, 2011 Texas Instruments Incorporated
* Contributed by: Mark Salter <msalter@redhat.com>
*/
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/slab.h>
#include <asm/soc.h>
#include <asm/megamod-pic.h>
#define NR_COMBINERS 4
#define NR_MUX_OUTPUTS 12
#define IRQ_UNMAPPED 0xffff
/*
* Megamodule Interrupt Controller register layout
*/
struct megamod_regs {
u32 evtflag[8];
u32 evtset[8];
u32 evtclr[8];
u32 reserved0[8];
u32 evtmask[8];
u32 mevtflag[8];
u32 expmask[8];
u32 mexpflag[8];
u32 intmux_unused;
u32 intmux[7];
u32 reserved1[8];
u32 aegmux[2];
u32 reserved2[14];
u32 intxstat;
u32 intxclr;
u32 intdmask;
u32 reserved3[13];
u32 evtasrt;
};
struct megamod_pic {
struct irq_domain *irqhost;
struct megamod_regs __iomem *regs;
raw_spinlock_t lock;
/* hw mux mapping */
unsigned int output_to_irq[NR_MUX_OUTPUTS];
};
static struct megamod_pic *mm_pic;
struct megamod_cascade_data {
struct megamod_pic *pic;
int index;
};
static struct megamod_cascade_data cascade_data[NR_COMBINERS];
static void mask_megamod(struct irq_data *data)
{
struct megamod_pic *pic = irq_data_get_irq_chip_data(data);
irq_hw_number_t src = irqd_to_hwirq(data);
u32 __iomem *evtmask = &pic->regs->evtmask[src / 32];
raw_spin_lock(&pic->lock);
soc_writel(soc_readl(evtmask) | (1 << (src & 31)), evtmask);
raw_spin_unlock(&pic->lock);
}
static void unmask_megamod(struct irq_data *data)
{
struct megamod_pic *pic = irq_data_get_irq_chip_data(data);
irq_hw_number_t src = irqd_to_hwirq(data);
u32 __iomem *evtmask = &pic->regs->evtmask[src / 32];
raw_spin_lock(&pic->lock);
soc_writel(soc_readl(evtmask) & ~(1 << (src & 31)), evtmask);
raw_spin_unlock(&pic->lock);
}
static struct irq_chip megamod_chip = {
.name = "megamod",
.irq_mask = mask_megamod,
.irq_unmask = unmask_megamod,
};
static void megamod_irq_cascade(struct irq_desc *desc)
{
struct megamod_cascade_data *cascade;
struct megamod_pic *pic;
unsigned int irq;
u32 events;
int n, idx;
cascade = irq_desc_get_handler_data(desc);
pic = cascade->pic;
idx = cascade->index;
while ((events = soc_readl(&pic->regs->mevtflag[idx])) != 0) {
n = __ffs(events);
irq = irq_linear_revmap(pic->irqhost, idx * 32 + n);
soc_writel(1 << n, &pic->regs->evtclr[idx]);
generic_handle_irq(irq);
}
}
static int megamod_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
struct megamod_pic *pic = h->host_data;
int i;
/* We shouldn't see a hwirq which is muxed to core controller */
for (i = 0; i < NR_MUX_OUTPUTS; i++)
if (pic->output_to_irq[i] == hw)
return -1;
irq_set_chip_data(virq, pic);
irq_set_chip_and_handler(virq, &megamod_chip, handle_level_irq);
/* Set default irq type */
irq_set_irq_type(virq, IRQ_TYPE_NONE);
return 0;
}
static const struct irq_domain_ops megamod_domain_ops = {
.map = megamod_map,
.xlate = irq_domain_xlate_onecell,
};
static void __init set_megamod_mux(struct megamod_pic *pic, int src, int output)
{
int index, offset;
u32 val;
if (src < 0 || src >= (NR_COMBINERS * 32)) {
pic->output_to_irq[output] = IRQ_UNMAPPED;
return;
}
/* four mappings per mux register */
index = output / 4;
offset = (output & 3) * 8;
val = soc_readl(&pic->regs->intmux[index]);
val &= ~(0xff << offset);
val |= src << offset;
soc_writel(val, &pic->regs->intmux[index]);
}
/*
* Parse the MUX mapping, if one exists.
*
* The MUX map is an array of up to 12 cells; one for each usable core priority
* interrupt. The value of a given cell is the megamodule interrupt source
* which is to be MUXed to the output corresponding to the cell position
* within the array. The first cell in the array corresponds to priority
* 4 and the last (12th) cell corresponds to priority 15. The allowed
* values are 4 - ((NR_COMBINERS * 32) - 1). Note that the combined interrupt
* sources (0 - 3) are not allowed to be mapped through this property. They
* are handled through the "interrupts" property. This allows us to use a
* value of zero as a "do not map" placeholder.
*/
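/*
 * Purely illustrative fragment (not taken from any real board file) showing
 * how such a map might look in the device tree, muxing megamodule events
 * 43..46 to core priorities 4..7 and leaving priorities 8..15 unmapped:
 *
 *	megamod_pic: interrupt-controller {
 *		compatible = "ti,c64x+megamod-pic";
 *		ti,c64x+megamod-pic-mux = <43 44 45 46 0 0 0 0 0 0 0 0>;
 *	};
 */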
static void __init parse_priority_map(struct megamod_pic *pic,
int *mapping, int size)
{
struct device_node *np = irq_domain_get_of_node(pic->irqhost);
const __be32 *map;
int i, maplen;
u32 val;
map = of_get_property(np, "ti,c64x+megamod-pic-mux", &maplen);
if (map) {
maplen /= 4;
if (maplen > size)
maplen = size;
for (i = 0; i < maplen; i++) {
val = be32_to_cpup(map);
if (val && val >= 4)
mapping[i] = val;
++map;
}
}
}
static struct megamod_pic * __init init_megamod_pic(struct device_node *np)
{
struct megamod_pic *pic;
int i, irq;
int mapping[NR_MUX_OUTPUTS];
pr_info("Initializing C64x+ Megamodule PIC\n");
pic = kzalloc(sizeof(struct megamod_pic), GFP_KERNEL);
if (!pic) {
pr_err("%pOF: Could not alloc PIC structure.\n", np);
return NULL;
}
pic->irqhost = irq_domain_add_linear(np, NR_COMBINERS * 32,
&megamod_domain_ops, pic);
if (!pic->irqhost) {
pr_err("%pOF: Could not alloc host.\n", np);
goto error_free;
}
pic->irqhost->host_data = pic;
raw_spin_lock_init(&pic->lock);
pic->regs = of_iomap(np, 0);
if (!pic->regs) {
pr_err("%pOF: Could not map registers.\n", np);
goto error_free;
}
/* Initialize MUX map */
for (i = 0; i < ARRAY_SIZE(mapping); i++)
mapping[i] = IRQ_UNMAPPED;
parse_priority_map(pic, mapping, ARRAY_SIZE(mapping));
/*
* We can have up to 12 interrupts cascading to the core controller.
* These cascades can be from the combined interrupt sources or for
* individual interrupt sources. The "interrupts" property only
* deals with the cascaded combined interrupts. The individual
* interrupts muxed to the core controller use the core controller
* as their interrupt parent.
*/
for (i = 0; i < NR_COMBINERS; i++) {
struct irq_data *irq_data;
irq_hw_number_t hwirq;
irq = irq_of_parse_and_map(np, i);
if (irq == NO_IRQ)
continue;
irq_data = irq_get_irq_data(irq);
if (!irq_data) {
pr_err("%pOF: combiner-%d no irq_data for virq %d!\n",
np, i, irq);
continue;
}
hwirq = irq_data->hwirq;
/*
* Check that device tree provided something in the range
* of the core priority interrupts (4 - 15).
*/
if (hwirq < 4 || hwirq >= NR_PRIORITY_IRQS) {
pr_err("%pOF: combiner-%d core irq %ld out of range!\n",
np, i, hwirq);
continue;
}
/* record the mapping */
mapping[hwirq - 4] = i;
pr_debug("%pOF: combiner-%d cascading to hwirq %ld\n",
np, i, hwirq);
cascade_data[i].pic = pic;
cascade_data[i].index = i;
/* mask and clear all events in combiner */
soc_writel(~0, &pic->regs->evtmask[i]);
soc_writel(~0, &pic->regs->evtclr[i]);
irq_set_chained_handler_and_data(irq, megamod_irq_cascade,
&cascade_data[i]);
}
/* Finally, set up the MUX registers */
for (i = 0; i < NR_MUX_OUTPUTS; i++) {
if (mapping[i] != IRQ_UNMAPPED) {
pr_debug("%pOF: setting mux %d to priority %d\n",
np, mapping[i], i + 4);
set_megamod_mux(pic, mapping[i], i);
}
}
return pic;
error_free:
kfree(pic);
return NULL;
}
/*
* Return next active event after ACK'ing it.
* Return -1 if no events active.
*/
static int get_exception(void)
{
int i, bit;
u32 mask;
for (i = 0; i < NR_COMBINERS; i++) {
mask = soc_readl(&mm_pic->regs->mexpflag[i]);
if (mask) {
bit = __ffs(mask);
soc_writel(1 << bit, &mm_pic->regs->evtclr[i]);
return (i * 32) + bit;
}
}
return -1;
}
static void assert_event(unsigned int val)
{
soc_writel(val, &mm_pic->regs->evtasrt);
}
void __init megamod_pic_init(void)
{
struct device_node *np;
np = of_find_compatible_node(NULL, NULL, "ti,c64x+megamod-pic");
if (!np)
return;
mm_pic = init_megamod_pic(np);
of_node_put(np);
soc_ops.get_exception = get_exception;
soc_ops.assert_event = assert_event;
return;
}
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Clock and PLL control for C64x+ devices
*
* Copyright (C) 2010, 2011 Texas Instruments.
* Contributed by: Mark Salter <msalter@redhat.com>
*
* Copied heavily from arm/mach-davinci/clock.c, so:
*
* Copyright (C) 2006-2007 Texas Instruments.
* Copyright (C) 2008-2009 Deep Root Systems, LLC
*/
#include <linux/module.h>
#include <linux/clkdev.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/err.h>
#include <asm/clock.h>
#include <asm/soc.h>
static LIST_HEAD(clocks);
static DEFINE_MUTEX(clocks_mutex);
static DEFINE_SPINLOCK(clockfw_lock);
static void __clk_enable(struct clk *clk)
{
if (clk->parent)
__clk_enable(clk->parent);
clk->usecount++;
}
static void __clk_disable(struct clk *clk)
{
if (WARN_ON(clk->usecount == 0))
return;
--clk->usecount;
if (clk->parent)
__clk_disable(clk->parent);
}
int clk_enable(struct clk *clk)
{
unsigned long flags;
if (clk == NULL || IS_ERR(clk))
return -EINVAL;
spin_lock_irqsave(&clockfw_lock, flags);
__clk_enable(clk);
spin_unlock_irqrestore(&clockfw_lock, flags);
return 0;
}
EXPORT_SYMBOL(clk_enable);
void clk_disable(struct clk *clk)
{
unsigned long flags;
if (clk == NULL || IS_ERR(clk))
return;
spin_lock_irqsave(&clockfw_lock, flags);
__clk_disable(clk);
spin_unlock_irqrestore(&clockfw_lock, flags);
}
EXPORT_SYMBOL(clk_disable);
unsigned long clk_get_rate(struct clk *clk)
{
if (clk == NULL || IS_ERR(clk))
return -EINVAL;
return clk->rate;
}
EXPORT_SYMBOL(clk_get_rate);
long clk_round_rate(struct clk *clk, unsigned long rate)
{
if (clk == NULL || IS_ERR(clk))
return -EINVAL;
if (clk->round_rate)
return clk->round_rate(clk, rate);
return clk->rate;
}
EXPORT_SYMBOL(clk_round_rate);
/* Propagate rate to children */
static void propagate_rate(struct clk *root)
{
struct clk *clk;
list_for_each_entry(clk, &root->children, childnode) {
if (clk->recalc)
clk->rate = clk->recalc(clk);
propagate_rate(clk);
}
}
int clk_set_rate(struct clk *clk, unsigned long rate)
{
unsigned long flags;
int ret = -EINVAL;
if (clk == NULL || IS_ERR(clk))
return ret;
if (clk->set_rate)
ret = clk->set_rate(clk, rate);
spin_lock_irqsave(&clockfw_lock, flags);
if (ret == 0) {
if (clk->recalc)
clk->rate = clk->recalc(clk);
propagate_rate(clk);
}
spin_unlock_irqrestore(&clockfw_lock, flags);
return ret;
}
EXPORT_SYMBOL(clk_set_rate);
int clk_set_parent(struct clk *clk, struct clk *parent)
{
unsigned long flags;
if (clk == NULL || IS_ERR(clk))
return -EINVAL;
/* Cannot change parent on enabled clock */
if (WARN_ON(clk->usecount))
return -EINVAL;
mutex_lock(&clocks_mutex);
clk->parent = parent;
list_del_init(&clk->childnode);
list_add(&clk->childnode, &clk->parent->children);
mutex_unlock(&clocks_mutex);
spin_lock_irqsave(&clockfw_lock, flags);
if (clk->recalc)
clk->rate = clk->recalc(clk);
propagate_rate(clk);
spin_unlock_irqrestore(&clockfw_lock, flags);
return 0;
}
EXPORT_SYMBOL(clk_set_parent);
int clk_register(struct clk *clk)
{
if (clk == NULL || IS_ERR(clk))
return -EINVAL;
if (WARN(clk->parent && !clk->parent->rate,
"CLK: %s parent %s has no rate!\n",
clk->name, clk->parent->name))
return -EINVAL;
mutex_lock(&clocks_mutex);
list_add_tail(&clk->node, &clocks);
if (clk->parent)
list_add_tail(&clk->childnode, &clk->parent->children);
mutex_unlock(&clocks_mutex);
/* If rate is already set, use it */
if (clk->rate)
return 0;
/* Else, see if there is a way to calculate it */
if (clk->recalc)
clk->rate = clk->recalc(clk);
/* Otherwise, default to parent rate */
else if (clk->parent)
clk->rate = clk->parent->rate;
return 0;
}
EXPORT_SYMBOL(clk_register);
void clk_unregister(struct clk *clk)
{
if (clk == NULL || IS_ERR(clk))
return;
mutex_lock(&clocks_mutex);
list_del(&clk->node);
list_del(&clk->childnode);
mutex_unlock(&clocks_mutex);
}
EXPORT_SYMBOL(clk_unregister);
static u32 pll_read(struct pll_data *pll, int reg)
{
return soc_readl(pll->base + reg);
}
static unsigned long clk_sysclk_recalc(struct clk *clk)
{
u32 v, plldiv = 0;
struct pll_data *pll;
unsigned long rate = clk->rate;
if (WARN_ON(!clk->parent))
return rate;
rate = clk->parent->rate;
/* the parent must be a PLL */
if (WARN_ON(!clk->parent->pll_data))
return rate;
pll = clk->parent->pll_data;
/* If pre-PLL, source clock is before the multiplier and divider(s) */
if (clk->flags & PRE_PLL)
rate = pll->input_rate;
if (!clk->div) {
pr_debug("%s: (no divider) rate = %lu KHz\n",
clk->name, rate / 1000);
return rate;
}
if (clk->flags & FIXED_DIV_PLL) {
rate /= clk->div;
pr_debug("%s: (fixed divide by %d) rate = %lu KHz\n",
clk->name, clk->div, rate / 1000);
return rate;
}
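/*
 * Otherwise clk->div holds the offset of a PLLDIVn register; the ratio
 * field stores (divider - 1), hence the +1 below when the divider is
 * enabled.
 */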
v = pll_read(pll, clk->div);
if (v & PLLDIV_EN)
plldiv = (v & PLLDIV_RATIO_MASK) + 1;
if (plldiv == 0)
plldiv = 1;
rate /= plldiv;
pr_debug("%s: (divide by %d) rate = %lu KHz\n",
clk->name, plldiv, rate / 1000);
return rate;
}
static unsigned long clk_leafclk_recalc(struct clk *clk)
{
if (WARN_ON(!clk->parent))
return clk->rate;
pr_debug("%s: (parent %s) rate = %lu KHz\n",
clk->name, clk->parent->name, clk->parent->rate / 1000);
return clk->parent->rate;
}
static unsigned long clk_pllclk_recalc(struct clk *clk)
{
u32 ctrl, mult = 0, prediv = 0, postdiv = 0;
u8 bypass;
struct pll_data *pll = clk->pll_data;
unsigned long rate = clk->rate;
if (clk->flags & FIXED_RATE_PLL)
return rate;
ctrl = pll_read(pll, PLLCTL);
rate = pll->input_rate = clk->parent->rate;
if (ctrl & PLLCTL_PLLEN)
bypass = 0;
else
bypass = 1;
if (pll->flags & PLL_HAS_MUL) {
mult = pll_read(pll, PLLM);
mult = (mult & PLLM_PLLM_MASK) + 1;
}
if (pll->flags & PLL_HAS_PRE) {
prediv = pll_read(pll, PLLPRE);
if (prediv & PLLDIV_EN)
prediv = (prediv & PLLDIV_RATIO_MASK) + 1;
else
prediv = 0;
}
if (pll->flags & PLL_HAS_POST) {
postdiv = pll_read(pll, PLLPOST);
if (postdiv & PLLDIV_EN)
postdiv = (postdiv & PLLDIV_RATIO_MASK) + 1;
else
postdiv = 1;
}
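/*
 * Illustrative numbers only: with a 25 MHz input, no pre-divider, a
 * multiplier of 28 and a post-divider of 1, the computation below yields
 * a 700 MHz PLL output; in bypass mode the input rate passes through.
 */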
if (!bypass) {
if (prediv)
rate /= prediv;
if (mult)
rate *= mult;
if (postdiv)
rate /= postdiv;
pr_debug("PLL%d: input = %luMHz, pre[%d] mul[%d] post[%d] "
"--> %luMHz output.\n",
pll->num, clk->parent->rate / 1000000,
prediv, mult, postdiv, rate / 1000000);
} else
pr_debug("PLL%d: input = %luMHz, bypass mode.\n",
pll->num, clk->parent->rate / 1000000);
return rate;
}
static void __init __init_clk(struct clk *clk)
{
INIT_LIST_HEAD(&clk->node);
INIT_LIST_HEAD(&clk->children);
INIT_LIST_HEAD(&clk->childnode);
if (!clk->recalc) {
/* Check if clock is a PLL */
if (clk->pll_data)
clk->recalc = clk_pllclk_recalc;
/* Else, if it is a PLL-derived clock */
else if (clk->flags & CLK_PLL)
clk->recalc = clk_sysclk_recalc;
/* Otherwise, it is a leaf clock (PSC clock) */
else if (clk->parent)
clk->recalc = clk_leafclk_recalc;
}
}
void __init c6x_clks_init(struct clk_lookup *clocks)
{
struct clk_lookup *c;
struct clk *clk;
size_t num_clocks = 0;
for (c = clocks; c->clk; c++) {
clk = c->clk;
__init_clk(clk);
clk_register(clk);
num_clocks++;
/* Turn on clocks that Linux doesn't otherwise manage */
if (clk->flags & ALWAYS_ENABLED)
clk_enable(clk);
}
clkdev_add_table(clocks, num_clocks);
}
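/*
 * Sketch of how a driver consumes these clocks once the lookup table is
 * registered with clkdev; "my_dev" is a placeholder device, not one from
 * this file:
 *
 *	struct clk *clk = clk_get(&my_dev->dev, NULL);
 *	if (!IS_ERR(clk)) {
 *		clk_enable(clk);
 *		pr_info("rate: %lu\n", clk_get_rate(clk));
 *	}
 */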
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#define CLKNAME_MAX 10 /* longest clock name */
#define NEST_DELTA 2
#define NEST_MAX 4
static void
dump_clock(struct seq_file *s, unsigned nest, struct clk *parent)
{
char *state;
char buf[CLKNAME_MAX + NEST_DELTA * NEST_MAX];
struct clk *clk;
unsigned i;
if (parent->flags & CLK_PLL)
state = "pll";
else
state = "";
/* <nest spaces> name <pad to end> */
memset(buf, ' ', sizeof(buf) - 1);
buf[sizeof(buf) - 1] = 0;
i = strlen(parent->name);
memcpy(buf + nest, parent->name,
min(i, (unsigned)(sizeof(buf) - 1 - nest)));
seq_printf(s, "%s users=%2d %-3s %9ld Hz\n",
buf, parent->usecount, state, clk_get_rate(parent));
/* REVISIT show device associations too */
/* cost is now small, but not linear... */
list_for_each_entry(clk, &parent->children, childnode) {
dump_clock(s, nest + NEST_DELTA, clk);
}
}
static int c6x_ck_show(struct seq_file *m, void *v)
{
struct clk *clk;
/*
* Show the clock tree; we trust nonzero usecounts equate to PSC enables...
*/
mutex_lock(&clocks_mutex);
list_for_each_entry(clk, &clocks, node)
if (!clk->parent)
dump_clock(m, 0, clk);
mutex_unlock(&clocks_mutex);
return 0;
}
static int c6x_ck_open(struct inode *inode, struct file *file)
{
return single_open(file, c6x_ck_show, NULL);
}
static const struct file_operations c6x_ck_operations = {
.open = c6x_ck_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int __init c6x_clk_debugfs_init(void)
{
debugfs_create_file("c6x_clocks", S_IFREG | S_IRUGO, NULL, NULL,
&c6x_ck_operations);
return 0;
}
device_initcall(c6x_clk_debugfs_init);
#endif /* CONFIG_DEBUG_FS */
// SPDX-License-Identifier: GPL-2.0-only
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*/
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/ioport.h>
#include <linux/clkdev.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <asm/clock.h>
#include <asm/setup.h>
#include <asm/special_insns.h>
#include <asm/irq.h>
/*
* Common SoC clock support.
*/
/* Default input for PLL1 */
struct clk clkin1 = {
.name = "clkin1",
.node = LIST_HEAD_INIT(clkin1.node),
.children = LIST_HEAD_INIT(clkin1.children),
.childnode = LIST_HEAD_INIT(clkin1.childnode),
};
struct pll_data c6x_soc_pll1 = {
.num = 1,
.sysclks = {
{
.name = "pll1",
.parent = &clkin1,
.pll_data = &c6x_soc_pll1,
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk1",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk2",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk3",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk4",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk5",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk6",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk7",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk8",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk9",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk10",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk11",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk12",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk13",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk14",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk15",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
{
.name = "pll1_sysclk16",
.parent = &c6x_soc_pll1.sysclks[0],
.flags = CLK_PLL,
},
},
};
/* CPU core clock */
struct clk c6x_core_clk = {
.name = "core",
};
/* miscellaneous IO clocks */
struct clk c6x_i2c_clk = {
.name = "i2c",
};
struct clk c6x_watchdog_clk = {
.name = "watchdog",
};
struct clk c6x_mcbsp1_clk = {
.name = "mcbsp1",
};
struct clk c6x_mcbsp2_clk = {
.name = "mcbsp2",
};
struct clk c6x_mdio_clk = {
.name = "mdio",
};
#ifdef CONFIG_SOC_TMS320C6455
static struct clk_lookup c6455_clks[] = {
CLK(NULL, "pll1", &c6x_soc_pll1.sysclks[0]),
CLK(NULL, "pll1_sysclk2", &c6x_soc_pll1.sysclks[2]),
CLK(NULL, "pll1_sysclk3", &c6x_soc_pll1.sysclks[3]),
CLK(NULL, "pll1_sysclk4", &c6x_soc_pll1.sysclks[4]),
CLK(NULL, "pll1_sysclk5", &c6x_soc_pll1.sysclks[5]),
CLK(NULL, "core", &c6x_core_clk),
CLK("i2c_davinci.1", NULL, &c6x_i2c_clk),
CLK("watchdog", NULL, &c6x_watchdog_clk),
CLK("2c81800.mdio", NULL, &c6x_mdio_clk),
CLK("", NULL, NULL)
};
static void __init c6455_setup_clocks(struct device_node *node)
{
struct pll_data *pll = &c6x_soc_pll1;
struct clk *sysclks = pll->sysclks;
pll->flags = PLL_HAS_PRE | PLL_HAS_MUL;
sysclks[2].flags |= FIXED_DIV_PLL;
sysclks[2].div = 3;
sysclks[3].flags |= FIXED_DIV_PLL;
sysclks[3].div = 6;
sysclks[4].div = PLLDIV4;
sysclks[5].div = PLLDIV5;
c6x_core_clk.parent = &sysclks[0];
c6x_i2c_clk.parent = &sysclks[3];
c6x_watchdog_clk.parent = &sysclks[3];
c6x_mdio_clk.parent = &sysclks[3];
c6x_clks_init(c6455_clks);
}
#endif /* CONFIG_SOC_TMS320C6455 */
#ifdef CONFIG_SOC_TMS320C6457
static struct clk_lookup c6457_clks[] = {
CLK(NULL, "pll1", &c6x_soc_pll1.sysclks[0]),
CLK(NULL, "pll1_sysclk1", &c6x_soc_pll1.sysclks[1]),
CLK(NULL, "pll1_sysclk2", &c6x_soc_pll1.sysclks[2]),
CLK(NULL, "pll1_sysclk3", &c6x_soc_pll1.sysclks[3]),
CLK(NULL, "pll1_sysclk4", &c6x_soc_pll1.sysclks[4]),
CLK(NULL, "pll1_sysclk5", &c6x_soc_pll1.sysclks[5]),
CLK(NULL, "core", &c6x_core_clk),
CLK("i2c_davinci.1", NULL, &c6x_i2c_clk),
CLK("watchdog", NULL, &c6x_watchdog_clk),
CLK("2c81800.mdio", NULL, &c6x_mdio_clk),
CLK("", NULL, NULL)
};
static void __init c6457_setup_clocks(struct device_node *node)
{
struct pll_data *pll = &c6x_soc_pll1;
struct clk *sysclks = pll->sysclks;
pll->flags = PLL_HAS_MUL | PLL_HAS_POST;
sysclks[1].flags |= FIXED_DIV_PLL;
sysclks[1].div = 1;
sysclks[2].flags |= FIXED_DIV_PLL;
sysclks[2].div = 3;
sysclks[3].flags |= FIXED_DIV_PLL;
sysclks[3].div = 6;
sysclks[4].div = PLLDIV4;
sysclks[5].div = PLLDIV5;
c6x_core_clk.parent = &sysclks[1];
c6x_i2c_clk.parent = &sysclks[3];
c6x_watchdog_clk.parent = &sysclks[5];
c6x_mdio_clk.parent = &sysclks[5];
c6x_clks_init(c6457_clks);
}
#endif /* CONFIG_SOC_TMS320C6457 */
#ifdef CONFIG_SOC_TMS320C6472
static struct clk_lookup c6472_clks[] = {
CLK(NULL, "pll1", &c6x_soc_pll1.sysclks[0]),
CLK(NULL, "pll1_sysclk1", &c6x_soc_pll1.sysclks[1]),
CLK(NULL, "pll1_sysclk2", &c6x_soc_pll1.sysclks[2]),
CLK(NULL, "pll1_sysclk3", &c6x_soc_pll1.sysclks[3]),
CLK(NULL, "pll1_sysclk4", &c6x_soc_pll1.sysclks[4]),
CLK(NULL, "pll1_sysclk5", &c6x_soc_pll1.sysclks[5]),
CLK(NULL, "pll1_sysclk6", &c6x_soc_pll1.sysclks[6]),
CLK(NULL, "pll1_sysclk7", &c6x_soc_pll1.sysclks[7]),
CLK(NULL, "pll1_sysclk8", &c6x_soc_pll1.sysclks[8]),
CLK(NULL, "pll1_sysclk9", &c6x_soc_pll1.sysclks[9]),
CLK(NULL, "pll1_sysclk10", &c6x_soc_pll1.sysclks[10]),
CLK(NULL, "core", &c6x_core_clk),
CLK("i2c_davinci.1", NULL, &c6x_i2c_clk),
CLK("watchdog", NULL, &c6x_watchdog_clk),
CLK("2c81800.mdio", NULL, &c6x_mdio_clk),
CLK("", NULL, NULL)
};
/* assumptions used for delay loop calculations */
#define MIN_CLKIN1_KHz 15625
#define MAX_CORE_KHz 700000
#define MIN_PLLOUT_KHz MIN_CLKIN1_KHz
static void __init c6472_setup_clocks(struct device_node *node)
{
struct pll_data *pll = &c6x_soc_pll1;
struct clk *sysclks = pll->sysclks;
int i;
pll->flags = PLL_HAS_MUL;
for (i = 1; i <= 6; i++) {
sysclks[i].flags |= FIXED_DIV_PLL;
sysclks[i].div = 1;
}
sysclks[7].flags |= FIXED_DIV_PLL;
sysclks[7].div = 3;
sysclks[8].flags |= FIXED_DIV_PLL;
sysclks[8].div = 6;
sysclks[9].flags |= FIXED_DIV_PLL;
sysclks[9].div = 2;
sysclks[10].div = PLLDIV10;
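/* Each core has its own sysclk: core N is clocked from sysclk(N+1). */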
c6x_core_clk.parent = &sysclks[get_coreid() + 1];
c6x_i2c_clk.parent = &sysclks[8];
c6x_watchdog_clk.parent = &sysclks[8];
c6x_mdio_clk.parent = &sysclks[5];
c6x_clks_init(c6472_clks);
}
#endif /* CONFIG_SOC_TMS320C6472 */
#ifdef CONFIG_SOC_TMS320C6474
static struct clk_lookup c6474_clks[] = {
CLK(NULL, "pll1", &c6x_soc_pll1.sysclks[0]),
CLK(NULL, "pll1_sysclk7", &c6x_soc_pll1.sysclks[7]),
CLK(NULL, "pll1_sysclk9", &c6x_soc_pll1.sysclks[9]),
CLK(NULL, "pll1_sysclk10", &c6x_soc_pll1.sysclks[10]),
CLK(NULL, "pll1_sysclk11", &c6x_soc_pll1.sysclks[11]),
CLK(NULL, "pll1_sysclk12", &c6x_soc_pll1.sysclks[12]),
CLK(NULL, "pll1_sysclk13", &c6x_soc_pll1.sysclks[13]),
CLK(NULL, "core", &c6x_core_clk),
CLK("i2c_davinci.1", NULL, &c6x_i2c_clk),
CLK("mcbsp.1", NULL, &c6x_mcbsp1_clk),
CLK("mcbsp.2", NULL, &c6x_mcbsp2_clk),
CLK("watchdog", NULL, &c6x_watchdog_clk),
CLK("2c81800.mdio", NULL, &c6x_mdio_clk),
CLK("", NULL, NULL)
};
static void __init c6474_setup_clocks(struct device_node *node)
{
struct pll_data *pll = &c6x_soc_pll1;
struct clk *sysclks = pll->sysclks;
pll->flags = PLL_HAS_MUL;
sysclks[7].flags |= FIXED_DIV_PLL;
sysclks[7].div = 1;
sysclks[9].flags |= FIXED_DIV_PLL;
sysclks[9].div = 3;
sysclks[10].flags |= FIXED_DIV_PLL;
sysclks[10].div = 6;
sysclks[11].div = PLLDIV11;
sysclks[12].flags |= FIXED_DIV_PLL;
sysclks[12].div = 2;
sysclks[13].div = PLLDIV13;
c6x_core_clk.parent = &sysclks[7];
c6x_i2c_clk.parent = &sysclks[10];
c6x_watchdog_clk.parent = &sysclks[10];
c6x_mcbsp1_clk.parent = &sysclks[10];
c6x_mcbsp2_clk.parent = &sysclks[10];
c6x_clks_init(c6474_clks);
}
#endif /* CONFIG_SOC_TMS320C6474 */
#ifdef CONFIG_SOC_TMS320C6678
static struct clk_lookup c6678_clks[] = {
CLK(NULL, "pll1", &c6x_soc_pll1.sysclks[0]),
CLK(NULL, "pll1_refclk", &c6x_soc_pll1.sysclks[1]),
CLK(NULL, "pll1_sysclk2", &c6x_soc_pll1.sysclks[2]),
CLK(NULL, "pll1_sysclk3", &c6x_soc_pll1.sysclks[3]),
CLK(NULL, "pll1_sysclk4", &c6x_soc_pll1.sysclks[4]),
CLK(NULL, "pll1_sysclk5", &c6x_soc_pll1.sysclks[5]),
CLK(NULL, "pll1_sysclk6", &c6x_soc_pll1.sysclks[6]),
CLK(NULL, "pll1_sysclk7", &c6x_soc_pll1.sysclks[7]),
CLK(NULL, "pll1_sysclk8", &c6x_soc_pll1.sysclks[8]),
CLK(NULL, "pll1_sysclk9", &c6x_soc_pll1.sysclks[9]),
CLK(NULL, "pll1_sysclk10", &c6x_soc_pll1.sysclks[10]),
CLK(NULL, "pll1_sysclk11", &c6x_soc_pll1.sysclks[11]),
CLK(NULL, "core", &c6x_core_clk),
CLK("", NULL, NULL)
};
static void __init c6678_setup_clocks(struct device_node *node)
{
struct pll_data *pll = &c6x_soc_pll1;
struct clk *sysclks = pll->sysclks;
pll->flags = PLL_HAS_MUL;
sysclks[1].flags |= FIXED_DIV_PLL;
sysclks[1].div = 1;
sysclks[2].div = PLLDIV2;
sysclks[3].flags |= FIXED_DIV_PLL;
sysclks[3].div = 2;
sysclks[4].flags |= FIXED_DIV_PLL;
sysclks[4].div = 3;
sysclks[5].div = PLLDIV5;
sysclks[6].flags |= FIXED_DIV_PLL;
sysclks[6].div = 64;
sysclks[7].flags |= FIXED_DIV_PLL;
sysclks[7].div = 6;
sysclks[8].div = PLLDIV8;
sysclks[9].flags |= FIXED_DIV_PLL;
sysclks[9].div = 12;
sysclks[10].flags |= FIXED_DIV_PLL;
sysclks[10].div = 3;
sysclks[11].flags |= FIXED_DIV_PLL;
sysclks[11].div = 6;
c6x_core_clk.parent = &sysclks[0];
c6x_i2c_clk.parent = &sysclks[7];
c6x_clks_init(c6678_clks);
}
#endif /* CONFIG_SOC_TMS320C6678 */
static struct of_device_id c6x_clkc_match[] __initdata = {
#ifdef CONFIG_SOC_TMS320C6455
{ .compatible = "ti,c6455-pll", .data = c6455_setup_clocks },
#endif
#ifdef CONFIG_SOC_TMS320C6457
{ .compatible = "ti,c6457-pll", .data = c6457_setup_clocks },
#endif
#ifdef CONFIG_SOC_TMS320C6472
{ .compatible = "ti,c6472-pll", .data = c6472_setup_clocks },
#endif
#ifdef CONFIG_SOC_TMS320C6474
{ .compatible = "ti,c6474-pll", .data = c6474_setup_clocks },
#endif
#ifdef CONFIG_SOC_TMS320C6678
{ .compatible = "ti,c6678-pll", .data = c6678_setup_clocks },
#endif
{ .compatible = "ti,c64x+pll" },
{}
};
void __init c64x_setup_clocks(void)
{
void (*__setup_clocks)(struct device_node *np);
struct pll_data *pll = &c6x_soc_pll1;
struct device_node *node;
const struct of_device_id *id;
int err;
u32 val;
node = of_find_matching_node(NULL, c6x_clkc_match);
if (!node)
return;
pll->base = of_iomap(node, 0);
if (!pll->base)
goto out;
err = of_property_read_u32(node, "clock-frequency", &val);
if (err || val == 0) {
/* Fall back to a default 25 MHz reference clock. */
val = 25000000;
pr_err("%pOF: no clock-frequency found! Using %dMHz\n",
node, (int)val / 1000000);
}
clkin1.rate = val;
err = of_property_read_u32(node, "ti,c64x+pll-bypass-delay", &val);
if (err)
val = 5000;
pll->bypass_delay = val;
err = of_property_read_u32(node, "ti,c64x+pll-reset-delay", &val);
if (err)
val = 30000;
pll->reset_delay = val;
err = of_property_read_u32(node, "ti,c64x+pll-lock-delay", &val);
if (err)
val = 30000;
pll->lock_delay = val;
/* id->data is a pointer to SoC-specific setup */
id = of_match_node(c6x_clkc_match, node);
if (id && id->data) {
__setup_clocks = id->data;
__setup_clocks(node);
}
out:
of_node_put(node);
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2010, 2011 Texas Instruments Incorporated
* Contributed by: Mark Salter (msalter@redhat.com)
*/
#include <linux/clockchips.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <asm/soc.h>
#include <asm/dscr.h>
#include <asm/special_insns.h>
#include <asm/timer64.h>
struct timer_regs {
u32 reserved0;
u32 emumgt;
u32 reserved1;
u32 reserved2;
u32 cntlo;
u32 cnthi;
u32 prdlo;
u32 prdhi;
u32 tcr;
u32 tgcr;
u32 wdtcr;
};
static struct timer_regs __iomem *timer;
#define TCR_TSTATLO 0x001
#define TCR_INVOUTPLO 0x002
#define TCR_INVINPLO 0x004
#define TCR_CPLO 0x008
#define TCR_ENAMODELO_ONCE 0x040
#define TCR_ENAMODELO_CONT 0x080
#define TCR_ENAMODELO_MASK 0x0c0
#define TCR_PWIDLO_MASK 0x030
#define TCR_CLKSRCLO 0x100
#define TCR_TIENLO 0x200
#define TCR_TSTATHI (0x001 << 16)
#define TCR_INVOUTPHI (0x002 << 16)
#define TCR_CPHI (0x008 << 16)
#define TCR_PWIDHI_MASK (0x030 << 16)
#define TCR_ENAMODEHI_ONCE (0x040 << 16)
#define TCR_ENAMODEHI_CONT (0x080 << 16)
#define TCR_ENAMODEHI_MASK (0x0c0 << 16)
#define TGCR_TIMLORS 0x001
#define TGCR_TIMHIRS 0x002
#define TGCR_TIMMODE_UD32 0x004
#define TGCR_TIMMODE_WDT64 0x008
#define TGCR_TIMMODE_CD32 0x00c
#define TGCR_TIMMODE_MASK 0x00c
#define TGCR_PSCHI_MASK (0x00f << 8)
#define TGCR_TDDRHI_MASK (0x00f << 12)
/*
* Timer clocks are divided down from the CPU clock
* The divisor is in the EMUMGTCLKSPD register
*/
#define TIMER_DIVISOR \
((soc_readl(&timer->emumgt) & (0xf << 16)) >> 16)
#define TIMER64_RATE (c6x_core_freq / TIMER_DIVISOR)
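/*
 * Example with made-up numbers: a 700 MHz core clock and an EMUMGT
 * divisor field of 7 give TIMER64_RATE = 100 MHz, so a periodic tick at
 * HZ = 100 uses a period of 1000000 timer counts.
 */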
#define TIMER64_MODE_DISABLED 0
#define TIMER64_MODE_ONE_SHOT TCR_ENAMODELO_ONCE
#define TIMER64_MODE_PERIODIC TCR_ENAMODELO_CONT
static int timer64_mode;
static int timer64_devstate_id = -1;
static void timer64_config(unsigned long period)
{
u32 tcr = soc_readl(&timer->tcr) & ~TCR_ENAMODELO_MASK;
soc_writel(tcr, &timer->tcr);
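/* Program the new period; the -1 assumes the compare against prdlo is inclusive. */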
soc_writel(period - 1, &timer->prdlo);
soc_writel(0, &timer->cntlo);
tcr |= timer64_mode;
soc_writel(tcr, &timer->tcr);
}
static void timer64_enable(void)
{
u32 val;
if (timer64_devstate_id >= 0)
dscr_set_devstate(timer64_devstate_id, DSCR_DEVSTATE_ENABLED);
/* disable timer, reset count */
soc_writel(soc_readl(&timer->tcr) & ~TCR_ENAMODELO_MASK, &timer->tcr);
soc_writel(0, &timer->prdlo);
/* use internal clock and 1 cycle pulse width */
val = soc_readl(&timer->tcr);
soc_writel(val & ~(TCR_CLKSRCLO | TCR_PWIDLO_MASK), &timer->tcr);
/* dual 32-bit unchained mode */
val = soc_readl(&timer->tgcr) & ~TGCR_TIMMODE_MASK;
soc_writel(val, &timer->tgcr);
soc_writel(val | (TGCR_TIMLORS | TGCR_TIMMODE_UD32), &timer->tgcr);
}
static void timer64_disable(void)
{
/* disable timer, reset count */
soc_writel(soc_readl(&timer->tcr) & ~TCR_ENAMODELO_MASK, &timer->tcr);
soc_writel(0, &timer->prdlo);
if (timer64_devstate_id >= 0)
dscr_set_devstate(timer64_devstate_id, DSCR_DEVSTATE_DISABLED);
}
static int next_event(unsigned long delta,
struct clock_event_device *evt)
{
timer64_config(delta);
return 0;
}
static int set_periodic(struct clock_event_device *evt)
{
timer64_enable();
timer64_mode = TIMER64_MODE_PERIODIC;
timer64_config(TIMER64_RATE / HZ);
return 0;
}
static int set_oneshot(struct clock_event_device *evt)
{
timer64_enable();
timer64_mode = TIMER64_MODE_ONE_SHOT;
return 0;
}
static int shutdown(struct clock_event_device *evt)
{
timer64_mode = TIMER64_MODE_DISABLED;
timer64_disable();
return 0;
}
static struct clock_event_device t64_clockevent_device = {
.name = "TIMER64_EVT32_TIMER",
.features = CLOCK_EVT_FEAT_ONESHOT |
CLOCK_EVT_FEAT_PERIODIC,
.rating = 200,
.set_state_shutdown = shutdown,
.set_state_periodic = set_periodic,
.set_state_oneshot = set_oneshot,
.set_next_event = next_event,
};
static irqreturn_t timer_interrupt(int irq, void *dev_id)
{
struct clock_event_device *cd = &t64_clockevent_device;
cd->event_handler(cd);
return IRQ_HANDLED;
}
void __init timer64_init(void)
{
struct clock_event_device *cd = &t64_clockevent_device;
struct device_node *np, *first = NULL;
u32 val;
int err, found = 0;
for_each_compatible_node(np, NULL, "ti,c64x+timer64") {
err = of_property_read_u32(np, "ti,core-mask", &val);
if (!err) {
if (val & (1 << get_coreid())) {
found = 1;
break;
}
} else if (!first)
first = np;
}
if (!found) {
/* try first one with no core-mask */
if (first)
np = of_node_get(first);
else {
pr_debug("Cannot find ti,c64x+timer64 timer.\n");
return;
}
}
timer = of_iomap(np, 0);
if (!timer) {
pr_debug("%pOF: Cannot map timer registers.\n", np);
goto out;
}
pr_debug("%pOF: Timer registers=%p.\n", np, timer);
cd->irq = irq_of_parse_and_map(np, 0);
if (cd->irq == NO_IRQ) {
pr_debug("%pOF: Cannot find interrupt.\n", np);
iounmap(timer);
goto out;
}
/* If there is a device state control, save the ID. */
err = of_property_read_u32(np, "ti,dscr-dev-enable", &val);
if (!err) {
timer64_devstate_id = val;
/*
* It is necessary to enable the timer block here because
* the TIMER_DIVISOR macro needs to read a timer register
* to get the divisor.
*/
dscr_set_devstate(timer64_devstate_id, DSCR_DEVSTATE_ENABLED);
}
pr_debug("%pOF: Timer irq=%d.\n", np, cd->irq);
clockevents_calc_mult_shift(cd, c6x_core_freq / TIMER_DIVISOR, 5);
cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
cd->max_delta_ticks = 0x7fffffff;
cd->min_delta_ns = clockevent_delta2ns(250, cd);
cd->min_delta_ticks = 250;
cd->cpumask = cpumask_of(smp_processor_id());
clockevents_register_device(cd);
if (request_irq(cd->irq, timer_interrupt, IRQF_TIMER, "timer",
&t64_clockevent_device))
pr_err("Failed to request irq %d (timer)\n", cd->irq);
out:
of_node_put(np);
return;
}
@@ -80,7 +80,7 @@ config MOXTET
 config HISILICON_LPC
 	bool "Support for ISA I/O space on HiSilicon Hip06/7"
-	depends on (ARM64 && ARCH_HISI) || (COMPILE_TEST && !ALPHA && !HEXAGON && !PARISC && !C6X)
+	depends on (ARM64 && ARCH_HISI) || (COMPILE_TEST && !ALPHA && !HEXAGON && !PARISC)
 	depends on HAS_IOMEM
 	select INDIRECT_PIO if ARM64
 	help
...
@@ -45,7 +45,7 @@ config ARCH_USE_GNU_PROPERTY
 config BINFMT_ELF_FDPIC
 	bool "Kernel support for FDPIC ELF binaries"
 	default y if !BINFMT_ELF
-	depends on (ARM || (SUPERH && !MMU) || C6X)
+	depends on (ARM || (SUPERH && !MMU))
 	select ELFCORE
 	help
 	  ELF FDPIC binaries are based on ELF, but allow the individual load
...
@@ -63,11 +63,7 @@ extern unsigned long memory_end;
 #endif /* !__ASSEMBLY__ */
 
-#ifdef CONFIG_KERNEL_RAM_BASE_ADDRESS
-#define PAGE_OFFSET	(CONFIG_KERNEL_RAM_BASE_ADDRESS)
-#else
 #define PAGE_OFFSET	(0)
-#endif
 
 #ifndef ARCH_PFN_OFFSET
 #define ARCH_PFN_OFFSET	(PAGE_OFFSET >> PAGE_SHIFT)
...