Commit c6f2f3e2 authored by Linus Torvalds

Merge tag 'loongarch-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic

Pull initial Loongarch architecture code from Arnd Bergmann:
 "This is the majority of the loongarch architecture code, including the
  final system call interface and all core functionality.

  It still misses three sets of peripheral but vital patches to add
  support for other subsystems, which have yet to pass review:

   - The drivers/firmware/efi stub for booting from a standard UEFI
     firmware implementation. Neither the original custom boot interface
     nor a draft implementation of the EFI stub made it in, so it is
     currently impossible to boot the kernel until the loongarch
     specific portions get accepted into the UEFI subsystem

   - The drivers/irqchip/irq-loongson-*.c drivers are shared with the
     MIPS port, but currently lack support for ACPI based booting,
     which will get merged through the irqchip subsystem.

   - Similarly, the drivers/pci/controller/pci-loongson.c needs to be
     modified for ACPI support, which will be merged through the PCI
     subsystem.

  While the port cannot actually be used before all of the above are
  merged, having it in 5.19 helps to establish the user space ABI for
  the libc ports to build on, and helps any treewide changes in the
  mainline kernel get applied here as well.

  A gcc-12 based tool chain for build testing is now included in

    https://mirrors.edge.kernel.org/pub/tools/crosstool/"

Original description from Huacai Chen:
 "LoongArch is a new RISC ISA, which is a bit like MIPS or RISC-V.
  LoongArch includes a reduced 32-bit version (LA32R), a standard 32-bit
  version (LA32S) and a 64-bit version (LA64). LoongArch use ACPI as its
  boot protocol LoongArch-specific interrupt controllers (similar to APIC)
  are already added in the next revision of ACPI Specification (current
  revision is 6.4).

  This patchset adds basic LoongArch support to the mainline kernel; a
  complete snapshot can be seen here:

    https://github.com/loongson/linux/tree/loongarch-next
    https://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git/log/?h=loongarch-next

  Cross-compile tool chain to build kernel:

    https://github.com/loongson/build-tools/releases/download/2021.12.21/loongarch64-clfs-2022-03-03-cross-tools-gcc-glibc.tar.xz

  A CLFS-based Linux distro:

    https://github.com/loongson/build-tools/releases/download/2021.12.21/loongarch64-clfs-system-2022-03-03.tar.bz2

  Open-source tool chain under review (Binutils and GCC are already upstream):

    https://github.com/loongson/binutils-gdb/tree/upstream_v3.1
    https://github.com/loongson/gcc/tree/loongarch_upstream_v6.3
    https://github.com/loongson/glibc/tree/loongarch_2_35_dev_v2.2

  Loongson and LoongArch documentation:

    https://github.com/loongson/LoongArch-Documentation

  LoongArch-specific interrupt controllers:

    https://mantis.uefi.org/mantis/view.php?id=2203
    https://mantis.uefi.org/mantis/view.php?id=2313"

Link: https://lore.kernel.org/lkml/20220603072053.35005-1-chenhuacai@loongson.cn/

* tag 'loongarch-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic: (24 commits)
  MAINTAINERS: Add maintainer information for LoongArch
  LoongArch: Add Loongson-3 default config file
  LoongArch: Add Non-Uniform Memory Access (NUMA) support
  LoongArch: Add multi-processor (SMP) support
  LoongArch: Add VDSO and VSYSCALL support
  LoongArch: Add some library functions
  LoongArch: Add misc common routines
  LoongArch: Add ELF and module support
  LoongArch: Add signal handling support
  LoongArch: Add system call support
  LoongArch: Add memory management
  LoongArch: Add process management
  LoongArch: Add exception/interrupt handling
  LoongArch: Add boot and setup routines
  LoongArch: Add other common headers
  LoongArch: Add atomic/locking headers
  LoongArch: Add CPU definition headers
  LoongArch: Add build infrastructure
  LoongArch: Add writecombine support for drm
  LoongArch: Add ELF-related definitions
  ...
parents 21873bd6 8be44931
@@ -13,6 +13,7 @@ implementation.
arm/index
arm64/index
ia64/index
loongarch/index
m68k/index
mips/index
nios2/index
.. SPDX-License-Identifier: GPL-2.0
.. kernel-feat:: $srctree/Documentation/features loongarch
.. SPDX-License-Identifier: GPL-2.0
======================
LoongArch Architecture
======================
.. toctree::
   :maxdepth: 2
   :numbered:

   introduction
   irq-chip-model
   features

.. only:: subproject and html

   Indices
   =======

   * :ref:`genindex`
.. SPDX-License-Identifier: GPL-2.0
=======================================
IRQ chip model (hierarchy) of LoongArch
=======================================
Currently, LoongArch based processors (e.g. Loongson-3A5000) can only work together
with LS7A chipsets. The irq chips in LoongArch computers include CPUINTC (CPU Core
Interrupt Controller), LIOINTC (Legacy I/O Interrupt Controller), EIOINTC (Extended
I/O Interrupt Controller), HTVECINTC (Hyper-Transport Vector Interrupt Controller),
PCH-PIC (Main Interrupt Controller in LS7A chipset), PCH-LPC (LPC Interrupt Controller
in LS7A chipset) and PCH-MSI (MSI Interrupt Controller).
CPUINTC is a per-core controller (in the CPU), LIOINTC/EIOINTC/HTVECINTC are
per-package controllers (in the CPU), while PCH-PIC/PCH-LPC/PCH-MSI are
controllers outside the CPU (i.e., in the chipset). These controllers (in
other words, irqchips) are linked in a hierarchy, and there are two models
of hierarchy (legacy model and extended model).
Legacy IRQ model
================
In this model, IPIs (Inter-Processor Interrupts) and CPU local timer
interrupts go to CPUINTC directly, CPU UART interrupts go to LIOINTC, while
interrupts from all other devices go to PCH-PIC/PCH-LPC/PCH-MSI, are
gathered by HTVECINTC, then forwarded to LIOINTC, and finally to CPUINTC::
  +-----+     +---------+     +-------+
  | IPI | --> | CPUINTC | <-- | Timer |
  +-----+     +---------+     +-------+
                    ^
                    |
              +---------+     +-------+
              | LIOINTC | <-- | UARTs |
              +---------+     +-------+
                    ^
                    |
             +-----------+
             | HTVECINTC |
             +-----------+
              ^         ^
              |         |
       +---------+ +---------+
       | PCH-PIC | | PCH-MSI |
       +---------+ +---------+
        ^      ^          ^
        |      |          |
 +---------+ +---------+ +---------+
 | PCH-LPC | | Devices | | Devices |
 +---------+ +---------+ +---------+
      ^
      |
 +---------+
 | Devices |
 +---------+
Extended IRQ model
==================
In this model, IPIs (Inter-Processor Interrupts) and CPU local timer
interrupts go to CPUINTC directly, CPU UART interrupts go to LIOINTC, while
interrupts from all other devices go to PCH-PIC/PCH-LPC/PCH-MSI, are
gathered by EIOINTC, and then go to CPUINTC directly::
  +-----+     +---------+     +-------+
  | IPI | --> | CPUINTC | <-- | Timer |
  +-----+     +---------+     +-------+
               ^       ^
               |       |
        +---------+ +---------+     +-------+
        | EIOINTC | | LIOINTC | <-- | UARTs |
        +---------+ +---------+     +-------+
         ^       ^
         |       |
       +---------+ +---------+
       | PCH-PIC | | PCH-MSI |
       +---------+ +---------+
        ^      ^          ^
        |      |          |
 +---------+ +---------+ +---------+
 | PCH-LPC | | Devices | | Devices |
 +---------+ +---------+ +---------+
      ^
      |
 +---------+
 | Devices |
 +---------+
ACPI-related definitions
========================
CPUINTC::
ACPI_MADT_TYPE_CORE_PIC;
struct acpi_madt_core_pic;
enum acpi_madt_core_pic_version;
LIOINTC::
ACPI_MADT_TYPE_LIO_PIC;
struct acpi_madt_lio_pic;
enum acpi_madt_lio_pic_version;
EIOINTC::
ACPI_MADT_TYPE_EIO_PIC;
struct acpi_madt_eio_pic;
enum acpi_madt_eio_pic_version;
HTVECINTC::
ACPI_MADT_TYPE_HT_PIC;
struct acpi_madt_ht_pic;
enum acpi_madt_ht_pic_version;
PCH-PIC::
ACPI_MADT_TYPE_BIO_PIC;
struct acpi_madt_bio_pic;
enum acpi_madt_bio_pic_version;
PCH-MSI::
ACPI_MADT_TYPE_MSI_PIC;
struct acpi_madt_msi_pic;
enum acpi_madt_msi_pic_version;
PCH-LPC::
ACPI_MADT_TYPE_LPC_PIC;
struct acpi_madt_lpc_pic;
enum acpi_madt_lpc_pic_version;
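For illustration, a minimal sketch of how such MADT entries can be walked,
assuming the ACPICA definitions listed above and the kernel's generic
acpi_table_parse_madt() helper (the handler body here is hypothetical, not
the actual LoongArch implementation)::

  static int __init example_parse_core_pic(union acpi_subtable_headers *header,
                                           const unsigned long end)
  {
          struct acpi_madt_core_pic *core = (struct acpi_madt_core_pic *)header;

          /* One CPUINTC entry is reported per CPU core */
          pr_info("CPUINTC: core %d, UID %d\n",
                  core->core_id, core->processor_id);
          return 0;
  }

  /* Walk all CPUINTC entries in the MADT */
  acpi_table_parse_madt(ACPI_MADT_TYPE_CORE_PIC, example_parse_core_pic, 0);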
References
==========
Documentation of Loongson-3A5000:
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-3A5000-usermanual-1.02-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-3A5000-usermanual-1.02-EN.pdf (in English)
Documentation of Loongson's LS7A chipset:
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-7A1000-usermanual-2.00-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-7A1000-usermanual-2.00-EN.pdf (in English)
Note: CPUINTC is CSR.ECFG/CSR.ESTAT and its interrupt controller described
in Section 7.4 of "LoongArch Reference Manual, Vol 1"; LIOINTC is "Legacy I/O
Interrupts" described in Section 11.1 of "Loongson 3A5000 Processor Reference
Manual"; EIOINTC is "Extended I/O Interrupts" described in Section 11.2 of
"Loongson 3A5000 Processor Reference Manual"; HTVECINTC is "HyperTransport
Interrupts" described in Section 14.3 of "Loongson 3A5000 Processor Reference
Manual"; PCH-PIC/PCH-MSI is "Interrupt Controller" described in Section 5 of
"Loongson 7A1000 Bridge User Manual"; PCH-LPC is "LPC Interrupts" described in
Section 24.3 of "Loongson 7A1000 Bridge User Manual".
@@ -171,6 +171,7 @@ TODOList:
riscv/index
openrisc/index
parisc/index
loongarch/index
TODOList:
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst
:Original: Documentation/loongarch/features.rst
:Translator: Huacai Chen <chenhuacai@loongson.cn>
.. kernel-feat:: $srctree/Documentation/features loongarch
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst
:Original: Documentation/loongarch/index.rst
:Translator: Huacai Chen <chenhuacai@loongson.cn>
======================
LoongArch Architecture
======================

.. toctree::
   :maxdepth: 2
   :numbered:

   introduction
   irq-chip-model
   features

.. only:: subproject and html

   Indices
   =======

   * :ref:`genindex`
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst
:Original: Documentation/loongarch/irq-chip-model.rst
:Translator: Huacai Chen <chenhuacai@loongson.cn>
=======================================
IRQ chip model (hierarchy) of LoongArch
=======================================
Currently, LoongArch based processors (e.g. Loongson-3A5000) can only work
together with LS7A chipsets. The interrupt controllers (i.e., irqchips) in
LoongArch computers include CPUINTC (CPU Core Interrupt Controller), LIOINTC
(Legacy I/O Interrupt Controller), EIOINTC (Extended I/O Interrupt
Controller), HTVECINTC (Hyper-Transport Vector Interrupt Controller),
PCH-PIC (Main Interrupt Controller in the LS7A chipset), PCH-LPC (LPC
Interrupt Controller in the LS7A chipset) and PCH-MSI (MSI Interrupt
Controller).
CPUINTC is a per-core controller inside the CPU, LIOINTC/EIOINTC/HTVECINTC
are package-wide controllers inside the CPU (one per chip, shared by all
cores), while PCH-PIC/PCH-LPC/PCH-MSI are controllers outside the CPU (i.e.,
in the companion chipset). These controllers (in other words, irqchips) are
cascaded in a hierarchy, and there are two models of hierarchy (legacy model
and extended model).
Legacy IRQ model
================
In this model, IPIs (Inter-Processor Interrupts) and CPU local timer
interrupts go to CPUINTC directly, CPU UART interrupts go to LIOINTC, while
interrupts from all other devices go to the attached PCH-PIC/PCH-LPC/PCH-MSI,
are gathered by HTVECINTC, then forwarded to LIOINTC, and finally reach
CPUINTC::
  +-----+     +---------+     +-------+
  | IPI | --> | CPUINTC | <-- | Timer |
  +-----+     +---------+     +-------+
                    ^
                    |
              +---------+     +-------+
              | LIOINTC | <-- | UARTs |
              +---------+     +-------+
                    ^
                    |
             +-----------+
             | HTVECINTC |
             +-----------+
              ^         ^
              |         |
       +---------+ +---------+
       | PCH-PIC | | PCH-MSI |
       +---------+ +---------+
        ^      ^          ^
        |      |          |
 +---------+ +---------+ +---------+
 | PCH-LPC | | Devices | | Devices |
 +---------+ +---------+ +---------+
      ^
      |
 +---------+
 | Devices |
 +---------+
Extended IRQ model
==================
In this model, IPIs (Inter-Processor Interrupts) and CPU local timer
interrupts go to CPUINTC directly, CPU UART interrupts go to LIOINTC, while
interrupts from all other devices go to the attached PCH-PIC/PCH-LPC/PCH-MSI,
are gathered by EIOINTC, and then reach CPUINTC directly::
  +-----+     +---------+     +-------+
  | IPI | --> | CPUINTC | <-- | Timer |
  +-----+     +---------+     +-------+
               ^       ^
               |       |
        +---------+ +---------+     +-------+
        | EIOINTC | | LIOINTC | <-- | UARTs |
        +---------+ +---------+     +-------+
         ^       ^
         |       |
       +---------+ +---------+
       | PCH-PIC | | PCH-MSI |
       +---------+ +---------+
        ^      ^          ^
        |      |          |
 +---------+ +---------+ +---------+
 | PCH-LPC | | Devices | | Devices |
 +---------+ +---------+ +---------+
      ^
      |
 +---------+
 | Devices |
 +---------+
ACPI-related definitions
========================
CPUINTC::
ACPI_MADT_TYPE_CORE_PIC;
struct acpi_madt_core_pic;
enum acpi_madt_core_pic_version;
LIOINTC::
ACPI_MADT_TYPE_LIO_PIC;
struct acpi_madt_lio_pic;
enum acpi_madt_lio_pic_version;
EIOINTC::
ACPI_MADT_TYPE_EIO_PIC;
struct acpi_madt_eio_pic;
enum acpi_madt_eio_pic_version;
HTVECINTC::
ACPI_MADT_TYPE_HT_PIC;
struct acpi_madt_ht_pic;
enum acpi_madt_ht_pic_version;
PCH-PIC::
ACPI_MADT_TYPE_BIO_PIC;
struct acpi_madt_bio_pic;
enum acpi_madt_bio_pic_version;
PCH-MSI::
ACPI_MADT_TYPE_MSI_PIC;
struct acpi_madt_msi_pic;
enum acpi_madt_msi_pic_version;
PCH-LPC::
ACPI_MADT_TYPE_LPC_PIC;
struct acpi_madt_lpc_pic;
enum acpi_madt_lpc_pic_version;
References
==========
Documentation of Loongson-3A5000:
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-3A5000-usermanual-1.02-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-3A5000-usermanual-1.02-EN.pdf (in English)
Documentation of Loongson's LS7A chipset:
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-7A1000-usermanual-2.00-CN.pdf (in Chinese)
https://github.com/loongson/LoongArch-Documentation/releases/latest/download/Loongson-7A1000-usermanual-2.00-EN.pdf (in English)
Note: CPUINTC is the CSR.ECFG/CSR.ESTAT registers and their interrupt
control logic described in Section 7.4 of "LoongArch Reference Manual,
Vol 1"; LIOINTC is the "Legacy I/O Interrupts" described in Section 11.1 of
"Loongson 3A5000 Processor Reference Manual"; EIOINTC is the "Extended I/O
Interrupts" described in Section 11.2 of "Loongson 3A5000 Processor
Reference Manual"; HTVECINTC is the "HyperTransport Interrupts" described in
Section 14.3 of "Loongson 3A5000 Processor Reference Manual"; PCH-PIC/PCH-MSI
is the "Interrupt Controller" described in Chapter 5 of "Loongson 7A1000
Bridge User Manual"; PCH-LPC is the "LPC Interrupts" described in Section
24.3 of "Loongson 7A1000 Bridge User Manual".
@@ -11565,6 +11565,16 @@ S: Maintained
F: Documentation/devicetree/bindings/display/bridge/lontium,lt8912b.yaml
F: drivers/gpu/drm/bridge/lontium-lt8912b.c
LOONGARCH
M: Huacai Chen <chenhuacai@kernel.org>
R: WANG Xuerui <kernel@xen0n.name>
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git
F: arch/loongarch/
F: drivers/*/*loongarch*
F: Documentation/loongarch/
F: Documentation/translations/zh_CN/loongarch/
LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI)
M: Sathya Prakash <sathya.prakash@broadcom.com>
M: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
obj-y += kernel/
obj-y += mm/
obj-y += vdso/
# for cleaning
subdir- += boot
# SPDX-License-Identifier: GPL-2.0
#
# Author: Huacai Chen <chenhuacai@loongson.cn>
# Copyright (C) 2020-2022 Loongson Technology Corporation Limited
boot := arch/loongarch/boot
KBUILD_DEFCONFIG := loongson3_defconfig
KBUILD_IMAGE = $(boot)/vmlinux
#
# Select the object file format to substitute into the linker script.
#
64bit-tool-archpref = loongarch64
32bit-bfd = elf32-loongarch
64bit-bfd = elf64-loongarch
32bit-emul = elf32loongarch
64bit-emul = elf64loongarch
ifdef CONFIG_64BIT
tool-archpref = $(64bit-tool-archpref)
UTS_MACHINE := loongarch64
endif
ifneq ($(SUBARCH),$(ARCH))
ifeq ($(CROSS_COMPILE),)
CROSS_COMPILE := $(call cc-cross-prefix, $(tool-archpref)-linux- $(tool-archpref)-linux-gnu- $(tool-archpref)-unknown-linux-gnu-)
endif
endif
ifdef CONFIG_64BIT
ld-emul = $(64bit-emul)
cflags-y += -mabi=lp64s
endif
cflags-y += -G0 -pipe -msoft-float
LDFLAGS_vmlinux += -G0 -static -n -nostdlib
KBUILD_AFLAGS_KERNEL += -Wa,-mla-global-with-pcrel
KBUILD_CFLAGS_KERNEL += -Wa,-mla-global-with-pcrel
KBUILD_AFLAGS_MODULE += -Wa,-mla-global-with-abs
KBUILD_CFLAGS_MODULE += -fplt -Wa,-mla-global-with-abs,-mla-local-with-abs
cflags-y += -ffreestanding
cflags-y += $(call cc-option, -mno-check-zero-division)
load-y = 0x9000000000200000
bootvars-y = VMLINUX_LOAD_ADDRESS=$(load-y)
KBUILD_AFLAGS += $(cflags-y)
KBUILD_CFLAGS += $(cflags-y)
KBUILD_CPPFLAGS += -DVMLINUX_LOAD_ADDRESS=$(load-y)
# This is required to get dwarf unwinding tables into .debug_frame
# instead of .eh_frame so we don't discard them.
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
# Don't emit unaligned accesses.
# Not all LoongArch cores support unaligned access, and as kernel we can't
# rely on others to provide emulation for these accesses.
KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
KBUILD_CFLAGS += -isystem $(shell $(CC) -print-file-name=include)
KBUILD_LDFLAGS += -m $(ld-emul)
ifdef CONFIG_LOONGARCH
CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
endif
head-y := arch/loongarch/kernel/head.o
libs-y += arch/loongarch/lib/
ifeq ($(KBUILD_EXTMOD),)
prepare: vdso_prepare
vdso_prepare: prepare0
$(Q)$(MAKE) $(build)=arch/loongarch/vdso include/generated/vdso-offsets.h
endif
PHONY += vdso_install
vdso_install:
$(Q)$(MAKE) $(build)=arch/loongarch/vdso $@
all: $(KBUILD_IMAGE)
$(KBUILD_IMAGE): vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(bootvars-y) $@
install:
$(Q)install -D -m 755 $(KBUILD_IMAGE) $(INSTALL_PATH)/vmlinux-$(KERNELRELEASE)
$(Q)install -D -m 644 .config $(INSTALL_PATH)/config-$(KERNELRELEASE)
$(Q)install -D -m 644 System.map $(INSTALL_PATH)/System.map-$(KERNELRELEASE)
define archhelp
echo ' install - install kernel into $(INSTALL_PATH)'
echo
endef
# SPDX-License-Identifier: GPL-2.0-only
vmlinux*
#
# arch/loongarch/boot/Makefile
#
# Copyright (C) 2020-2022 Loongson Technology Corporation Limited
#
drop-sections := .comment .note .options .note.gnu.build-id
strip-flags := $(addprefix --remove-section=,$(drop-sections)) -S
OBJCOPYFLAGS_vmlinux.efi := -O binary $(strip-flags)
targets := vmlinux
quiet_cmd_strip = STRIP $@
cmd_strip = $(STRIP) -s -o $@ $<
$(obj)/vmlinux: vmlinux FORCE
$(call if_changed,strip)
# SPDX-License-Identifier: GPL-2.0-only
dtstree := $(srctree)/$(src)
dtb-y := $(patsubst $(dtstree)/%.dts,%.dtb, $(wildcard $(dtstree)/*.dts))
# SPDX-License-Identifier: GPL-2.0
generic-y += dma-contiguous.h
generic-y += export.h
generic-y += parport.h
generic-y += early_ioremap.h
generic-y += qrwlock.h
generic-y += qrwlock_types.h
generic-y += spinlock.h
generic-y += spinlock_types.h
generic-y += rwsem.h
generic-y += segment.h
generic-y += user.h
generic-y += stat.h
generic-y += fcntl.h
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += mman.h
generic-y += msgbuf.h
generic-y += sembuf.h
generic-y += shmbuf.h
generic-y += statfs.h
generic-y += socket.h
generic-y += sockios.h
generic-y += termios.h
generic-y += termbits.h
generic-y += poll.h
generic-y += param.h
generic-y += posix_types.h
generic-y += resource.h
generic-y += kvm_para.h
/* SPDX-License-Identifier: GPL-2.0 */
/*
* LoongArch specific ACPICA environments and implementation
*
* Author: Jianmin Lv <lvjianmin@loongson.cn>
* Huacai Chen <chenhuacai@loongson.cn>
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_LOONGARCH_ACENV_H
#define _ASM_LOONGARCH_ACENV_H
/*
* This header is required by ACPI core, but we have nothing to fill in
* right now. Will be updated later when needed.
*/
#endif /* _ASM_LOONGARCH_ACENV_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Author: Jianmin Lv <lvjianmin@loongson.cn>
* Huacai Chen <chenhuacai@loongson.cn>
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_LOONGARCH_ACPI_H
#define _ASM_LOONGARCH_ACPI_H
#ifdef CONFIG_ACPI
extern int acpi_strict;
extern int acpi_disabled;
extern int acpi_pci_disabled;
extern int acpi_noirq;
#define acpi_os_ioremap acpi_os_ioremap
void __init __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
static inline void disable_acpi(void)
{
acpi_disabled = 1;
acpi_pci_disabled = 1;
acpi_noirq = 1;
}
static inline bool acpi_has_cpu_in_madt(void)
{
return true;
}
extern struct list_head acpi_wakeup_device_list;
#endif /* !CONFIG_ACPI */
#define ACPI_TABLE_UPGRADE_MAX_PHYS ARCH_LOW_ADDRESS_LIMIT
#endif /* _ASM_LOONGARCH_ACPI_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*
* Derived from MIPS:
* Copyright (C) 1996, 99 Ralf Baechle
* Copyright (C) 2000, 2002 Maciej W. Rozycki
* Copyright (C) 1990, 1999 by Silicon Graphics, Inc.
*/
#ifndef _ASM_ADDRSPACE_H
#define _ASM_ADDRSPACE_H
#include <linux/const.h>
#include <asm/loongarch.h>
/*
* This gives the physical RAM offset.
*/
#ifndef __ASSEMBLY__
#ifndef PHYS_OFFSET
#define PHYS_OFFSET _AC(0, UL)
#endif
extern unsigned long vm_map_base;
#endif /* __ASSEMBLY__ */
#ifndef IO_BASE
#define IO_BASE CSR_DMW0_BASE
#endif
#ifndef CACHE_BASE
#define CACHE_BASE CSR_DMW1_BASE
#endif
#ifndef UNCACHE_BASE
#define UNCACHE_BASE CSR_DMW0_BASE
#endif
#define DMW_PABITS 48
#define TO_PHYS_MASK ((1ULL << DMW_PABITS) - 1)
/*
* Memory above this physical address will be considered highmem.
*/
#ifndef HIGHMEM_START
#define HIGHMEM_START (_AC(1, UL) << _AC(DMW_PABITS, UL))
#endif
#define TO_PHYS(x) ( ((x) & TO_PHYS_MASK))
#define TO_CACHE(x) (CACHE_BASE | ((x) & TO_PHYS_MASK))
#define TO_UNCACHE(x) (UNCACHE_BASE | ((x) & TO_PHYS_MASK))
/*
* This handles the memory map.
*/
#ifndef PAGE_OFFSET
#define PAGE_OFFSET (CACHE_BASE + PHYS_OFFSET)
#endif
#ifndef FIXADDR_TOP
#define FIXADDR_TOP ((unsigned long)(long)(int)0xfffe0000)
#endif
#ifdef __ASSEMBLY__
#define _ATYPE_
#define _ATYPE32_
#define _ATYPE64_
#define _CONST64_(x) x
#else
#define _ATYPE_ __PTRDIFF_TYPE__
#define _ATYPE32_ int
#define _ATYPE64_ __s64
#ifdef CONFIG_64BIT
#define _CONST64_(x) x ## L
#else
#define _CONST64_(x) x ## LL
#endif
#endif
/*
* 32/64-bit LoongArch address spaces
*/
#ifdef __ASSEMBLY__
#define _ACAST32_
#define _ACAST64_
#else
#define _ACAST32_ (_ATYPE_)(_ATYPE32_) /* widen if necessary */
#define _ACAST64_ (_ATYPE64_) /* do _not_ narrow */
#endif
#ifdef CONFIG_32BIT
#define UVRANGE 0x00000000
#define KPRANGE0 0x80000000
#define KPRANGE1 0xa0000000
#define KVRANGE 0xc0000000
#else
#define XUVRANGE _CONST64_(0x0000000000000000)
#define XSPRANGE _CONST64_(0x4000000000000000)
#define XKPRANGE _CONST64_(0x8000000000000000)
#define XKVRANGE _CONST64_(0xc000000000000000)
#endif
/*
* Returns the physical address of a KPRANGEx / XKPRANGE address
*/
#define PHYSADDR(a) ((_ACAST64_(a)) & TO_PHYS_MASK)
#endif /* _ASM_ADDRSPACE_H */
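/*
 * Illustrative worked example (not part of the original header): assuming
 * the direct-mapped-window bases the kernel conventionally programs,
 * CSR_DMW0_BASE = 0x8000000000000000 (uncached) and
 * CSR_DMW1_BASE = 0x9000000000000000 (cached), the macros above give:
 *
 *	TO_CACHE(0x200000)   == 0x9000000000200000  (cf. load-y in the Makefile)
 *	TO_UNCACHE(0x200000) == 0x8000000000200000
 *	TO_PHYS(0x9000000000200000) == 0x200000     (mask off the window bits)
 */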
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#include <generated/asm-offsets.h>
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/uaccess.h>
#include <asm/fpu.h>
#include <asm/mmu_context.h>
#include <asm/page.h>
#include <asm/ftrace.h>
#include <asm-generic/asm-prototypes.h>
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Some useful macros for LoongArch assembler code
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*
* Derived from MIPS:
* Copyright (C) 1995, 1996, 1997, 1999, 2001 by Ralf Baechle
* Copyright (C) 1999 by Silicon Graphics, Inc.
* Copyright (C) 2001 MIPS Technologies, Inc.
* Copyright (C) 2002 Maciej W. Rozycki
*/
#ifndef __ASM_ASM_H
#define __ASM_ASM_H
/* LoongArch pref instruction. */
#ifdef CONFIG_CPU_HAS_PREFETCH
#define PREF(hint, addr, offs) \
preld hint, addr, offs; \
#define PREFX(hint, addr, index) \
preldx hint, addr, index; \
#else /* !CONFIG_CPU_HAS_PREFETCH */
#define PREF(hint, addr, offs)
#define PREFX(hint, addr, index)
#endif /* !CONFIG_CPU_HAS_PREFETCH */
/*
* Stack alignment
*/
#define STACK_ALIGN ~(0xf)
/*
* Macros to handle different pointer/register sizes for 32/64-bit code
*/
/*
* Size of a register
*/
#ifndef __loongarch64
#define SZREG 4
#else
#define SZREG 8
#endif
/*
* Use the following macros in assembler code to load/store registers,
* pointers etc.
*/
#if (SZREG == 4)
#define REG_L ld.w
#define REG_S st.w
#define REG_ADD add.w
#define REG_SUB sub.w
#else /* SZREG == 8 */
#define REG_L ld.d
#define REG_S st.d
#define REG_ADD add.d
#define REG_SUB sub.d
#endif
/*
* How to add/sub/load/store/shift C int variables.
*/
#if (__SIZEOF_INT__ == 4)
#define INT_ADD add.w
#define INT_ADDI addi.w
#define INT_SUB sub.w
#define INT_L ld.w
#define INT_S st.w
#define INT_SLL slli.w
#define INT_SLLV sll.w
#define INT_SRL srli.w
#define INT_SRLV srl.w
#define INT_SRA srai.w
#define INT_SRAV sra.w
#endif
#if (__SIZEOF_INT__ == 8)
#define INT_ADD add.d
#define INT_ADDI addi.d
#define INT_SUB sub.d
#define INT_L ld.d
#define INT_S st.d
#define INT_SLL slli.d
#define INT_SLLV sll.d
#define INT_SRL srli.d
#define INT_SRLV srl.d
#define INT_SRA srai.d
#define INT_SRAV sra.d
#endif
/*
* How to add/sub/load/store/shift C long variables.
*/
#if (__SIZEOF_LONG__ == 4)
#define LONG_ADD add.w
#define LONG_ADDI addi.w
#define LONG_SUB sub.w
#define LONG_L ld.w
#define LONG_S st.w
#define LONG_SLL slli.w
#define LONG_SLLV sll.w
#define LONG_SRL srli.w
#define LONG_SRLV srl.w
#define LONG_SRA srai.w
#define LONG_SRAV sra.w
#ifdef __ASSEMBLY__
#define LONG .word
#endif
#define LONGSIZE 4
#define LONGMASK 3
#define LONGLOG 2
#endif
#if (__SIZEOF_LONG__ == 8)
#define LONG_ADD add.d
#define LONG_ADDI addi.d
#define LONG_SUB sub.d
#define LONG_L ld.d
#define LONG_S st.d
#define LONG_SLL slli.d
#define LONG_SLLV sll.d
#define LONG_SRL srli.d
#define LONG_SRLV srl.d
#define LONG_SRA srai.d
#define LONG_SRAV sra.d
#ifdef __ASSEMBLY__
#define LONG .dword
#endif
#define LONGSIZE 8
#define LONGMASK 7
#define LONGLOG 3
#endif
/*
* How to add/sub/load/store/shift pointers.
*/
#if (__SIZEOF_POINTER__ == 4)
#define PTR_ADD add.w
#define PTR_ADDI addi.w
#define PTR_SUB sub.w
#define PTR_L ld.w
#define PTR_S st.w
#define PTR_LI li.w
#define PTR_SLL slli.w
#define PTR_SLLV sll.w
#define PTR_SRL srli.w
#define PTR_SRLV srl.w
#define PTR_SRA srai.w
#define PTR_SRAV sra.w
#define PTR_SCALESHIFT 2
#ifdef __ASSEMBLY__
#define PTR .word
#endif
#define PTRSIZE 4
#define PTRLOG 2
#endif
#if (__SIZEOF_POINTER__ == 8)
#define PTR_ADD add.d
#define PTR_ADDI addi.d
#define PTR_SUB sub.d
#define PTR_L ld.d
#define PTR_S st.d
#define PTR_LI li.d
#define PTR_SLL slli.d
#define PTR_SLLV sll.d
#define PTR_SRL srli.d
#define PTR_SRLV srl.d
#define PTR_SRA srai.d
#define PTR_SRAV sra.d
#define PTR_SCALESHIFT 3
#ifdef __ASSEMBLY__
#define PTR .dword
#endif
#define PTRSIZE 8
#define PTRLOG 3
#endif
#endif /* __ASM_ASM_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_ASMMACRO_H
#define _ASM_ASMMACRO_H
#include <asm/asm-offsets.h>
#include <asm/regdef.h>
#include <asm/fpregdef.h>
#include <asm/loongarch.h>
.macro parse_v var val
\var = \val
.endm
.macro parse_r var r
\var = -1
.ifc \r, $r0
\var = 0
.endif
.ifc \r, $r1
\var = 1
.endif
.ifc \r, $r2
\var = 2
.endif
.ifc \r, $r3
\var = 3
.endif
.ifc \r, $r4
\var = 4
.endif
.ifc \r, $r5
\var = 5
.endif
.ifc \r, $r6
\var = 6
.endif
.ifc \r, $r7
\var = 7
.endif
.ifc \r, $r8
\var = 8
.endif
.ifc \r, $r9
\var = 9
.endif
.ifc \r, $r10
\var = 10
.endif
.ifc \r, $r11
\var = 11
.endif
.ifc \r, $r12
\var = 12
.endif
.ifc \r, $r13
\var = 13
.endif
.ifc \r, $r14
\var = 14
.endif
.ifc \r, $r15
\var = 15
.endif
.ifc \r, $r16
\var = 16
.endif
.ifc \r, $r17
\var = 17
.endif
.ifc \r, $r18
\var = 18
.endif
.ifc \r, $r19
\var = 19
.endif
.ifc \r, $r20
\var = 20
.endif
.ifc \r, $r21
\var = 21
.endif
.ifc \r, $r22
\var = 22
.endif
.ifc \r, $r23
\var = 23
.endif
.ifc \r, $r24
\var = 24
.endif
.ifc \r, $r25
\var = 25
.endif
.ifc \r, $r26
\var = 26
.endif
.ifc \r, $r27
\var = 27
.endif
.ifc \r, $r28
\var = 28
.endif
.ifc \r, $r29
\var = 29
.endif
.ifc \r, $r30
\var = 30
.endif
.ifc \r, $r31
\var = 31
.endif
.iflt \var
.error "Unable to parse register name \r"
.endif
.endm
.macro cpu_save_nonscratch thread
stptr.d s0, \thread, THREAD_REG23
stptr.d s1, \thread, THREAD_REG24
stptr.d s2, \thread, THREAD_REG25
stptr.d s3, \thread, THREAD_REG26
stptr.d s4, \thread, THREAD_REG27
stptr.d s5, \thread, THREAD_REG28
stptr.d s6, \thread, THREAD_REG29
stptr.d s7, \thread, THREAD_REG30
stptr.d s8, \thread, THREAD_REG31
stptr.d sp, \thread, THREAD_REG03
stptr.d fp, \thread, THREAD_REG22
.endm
.macro cpu_restore_nonscratch thread
ldptr.d s0, \thread, THREAD_REG23
ldptr.d s1, \thread, THREAD_REG24
ldptr.d s2, \thread, THREAD_REG25
ldptr.d s3, \thread, THREAD_REG26
ldptr.d s4, \thread, THREAD_REG27
ldptr.d s5, \thread, THREAD_REG28
ldptr.d s6, \thread, THREAD_REG29
ldptr.d s7, \thread, THREAD_REG30
ldptr.d s8, \thread, THREAD_REG31
ldptr.d ra, \thread, THREAD_REG01
ldptr.d sp, \thread, THREAD_REG03
ldptr.d fp, \thread, THREAD_REG22
.endm
.macro fpu_save_csr thread tmp
movfcsr2gr \tmp, fcsr0
stptr.w \tmp, \thread, THREAD_FCSR
.endm
.macro fpu_restore_csr thread tmp
ldptr.w \tmp, \thread, THREAD_FCSR
movgr2fcsr fcsr0, \tmp
.endm
.macro fpu_save_cc thread tmp0 tmp1
movcf2gr \tmp0, $fcc0
move \tmp1, \tmp0
movcf2gr \tmp0, $fcc1
bstrins.d \tmp1, \tmp0, 15, 8
movcf2gr \tmp0, $fcc2
bstrins.d \tmp1, \tmp0, 23, 16
movcf2gr \tmp0, $fcc3
bstrins.d \tmp1, \tmp0, 31, 24
movcf2gr \tmp0, $fcc4
bstrins.d \tmp1, \tmp0, 39, 32
movcf2gr \tmp0, $fcc5
bstrins.d \tmp1, \tmp0, 47, 40
movcf2gr \tmp0, $fcc6
bstrins.d \tmp1, \tmp0, 55, 48
movcf2gr \tmp0, $fcc7
bstrins.d \tmp1, \tmp0, 63, 56
stptr.d \tmp1, \thread, THREAD_FCC
.endm
.macro fpu_restore_cc thread tmp0 tmp1
ldptr.d \tmp0, \thread, THREAD_FCC
bstrpick.d \tmp1, \tmp0, 7, 0
movgr2cf $fcc0, \tmp1
bstrpick.d \tmp1, \tmp0, 15, 8
movgr2cf $fcc1, \tmp1
bstrpick.d \tmp1, \tmp0, 23, 16
movgr2cf $fcc2, \tmp1
bstrpick.d \tmp1, \tmp0, 31, 24
movgr2cf $fcc3, \tmp1
bstrpick.d \tmp1, \tmp0, 39, 32
movgr2cf $fcc4, \tmp1
bstrpick.d \tmp1, \tmp0, 47, 40
movgr2cf $fcc5, \tmp1
bstrpick.d \tmp1, \tmp0, 55, 48
movgr2cf $fcc6, \tmp1
bstrpick.d \tmp1, \tmp0, 63, 56
movgr2cf $fcc7, \tmp1
.endm
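/*
 * Illustrative note (not part of the original header): fpu_save_cc packs
 * the eight 1-bit condition-flag registers $fcc0..$fcc7 into a single
 * 64-bit word, one byte per flag, via movcf2gr + bstrins.d, and
 * fpu_restore_cc reverses this with bstrpick.d + movgr2cf.  A host-side
 * C model of the same layout:
 *
 *	uint64_t pack_fcc(const uint8_t fcc[8])
 *	{
 *		uint64_t word = 0;
 *
 *		for (int i = 0; i < 8; i++)
 *			word |= (uint64_t)fcc[i] << (8 * i);
 *		return word;
 *	}
 */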
.macro fpu_save_double thread tmp
li.w \tmp, THREAD_FPR0
PTR_ADD \tmp, \tmp, \thread
fst.d $f0, \tmp, THREAD_FPR0 - THREAD_FPR0
fst.d $f1, \tmp, THREAD_FPR1 - THREAD_FPR0
fst.d $f2, \tmp, THREAD_FPR2 - THREAD_FPR0
fst.d $f3, \tmp, THREAD_FPR3 - THREAD_FPR0
fst.d $f4, \tmp, THREAD_FPR4 - THREAD_FPR0
fst.d $f5, \tmp, THREAD_FPR5 - THREAD_FPR0
fst.d $f6, \tmp, THREAD_FPR6 - THREAD_FPR0
fst.d $f7, \tmp, THREAD_FPR7 - THREAD_FPR0
fst.d $f8, \tmp, THREAD_FPR8 - THREAD_FPR0
fst.d $f9, \tmp, THREAD_FPR9 - THREAD_FPR0
fst.d $f10, \tmp, THREAD_FPR10 - THREAD_FPR0
fst.d $f11, \tmp, THREAD_FPR11 - THREAD_FPR0
fst.d $f12, \tmp, THREAD_FPR12 - THREAD_FPR0
fst.d $f13, \tmp, THREAD_FPR13 - THREAD_FPR0
fst.d $f14, \tmp, THREAD_FPR14 - THREAD_FPR0
fst.d $f15, \tmp, THREAD_FPR15 - THREAD_FPR0
fst.d $f16, \tmp, THREAD_FPR16 - THREAD_FPR0
fst.d $f17, \tmp, THREAD_FPR17 - THREAD_FPR0
fst.d $f18, \tmp, THREAD_FPR18 - THREAD_FPR0
fst.d $f19, \tmp, THREAD_FPR19 - THREAD_FPR0
fst.d $f20, \tmp, THREAD_FPR20 - THREAD_FPR0
fst.d $f21, \tmp, THREAD_FPR21 - THREAD_FPR0
fst.d $f22, \tmp, THREAD_FPR22 - THREAD_FPR0
fst.d $f23, \tmp, THREAD_FPR23 - THREAD_FPR0
fst.d $f24, \tmp, THREAD_FPR24 - THREAD_FPR0
fst.d $f25, \tmp, THREAD_FPR25 - THREAD_FPR0
fst.d $f26, \tmp, THREAD_FPR26 - THREAD_FPR0
fst.d $f27, \tmp, THREAD_FPR27 - THREAD_FPR0
fst.d $f28, \tmp, THREAD_FPR28 - THREAD_FPR0
fst.d $f29, \tmp, THREAD_FPR29 - THREAD_FPR0
fst.d $f30, \tmp, THREAD_FPR30 - THREAD_FPR0
fst.d $f31, \tmp, THREAD_FPR31 - THREAD_FPR0
.endm
.macro fpu_restore_double thread tmp
li.w \tmp, THREAD_FPR0
PTR_ADD \tmp, \tmp, \thread
fld.d $f0, \tmp, THREAD_FPR0 - THREAD_FPR0
fld.d $f1, \tmp, THREAD_FPR1 - THREAD_FPR0
fld.d $f2, \tmp, THREAD_FPR2 - THREAD_FPR0
fld.d $f3, \tmp, THREAD_FPR3 - THREAD_FPR0
fld.d $f4, \tmp, THREAD_FPR4 - THREAD_FPR0
fld.d $f5, \tmp, THREAD_FPR5 - THREAD_FPR0
fld.d $f6, \tmp, THREAD_FPR6 - THREAD_FPR0
fld.d $f7, \tmp, THREAD_FPR7 - THREAD_FPR0
fld.d $f8, \tmp, THREAD_FPR8 - THREAD_FPR0
fld.d $f9, \tmp, THREAD_FPR9 - THREAD_FPR0
fld.d $f10, \tmp, THREAD_FPR10 - THREAD_FPR0
fld.d $f11, \tmp, THREAD_FPR11 - THREAD_FPR0
fld.d $f12, \tmp, THREAD_FPR12 - THREAD_FPR0
fld.d $f13, \tmp, THREAD_FPR13 - THREAD_FPR0
fld.d $f14, \tmp, THREAD_FPR14 - THREAD_FPR0
fld.d $f15, \tmp, THREAD_FPR15 - THREAD_FPR0
fld.d $f16, \tmp, THREAD_FPR16 - THREAD_FPR0
fld.d $f17, \tmp, THREAD_FPR17 - THREAD_FPR0
fld.d $f18, \tmp, THREAD_FPR18 - THREAD_FPR0
fld.d $f19, \tmp, THREAD_FPR19 - THREAD_FPR0
fld.d $f20, \tmp, THREAD_FPR20 - THREAD_FPR0
fld.d $f21, \tmp, THREAD_FPR21 - THREAD_FPR0
fld.d $f22, \tmp, THREAD_FPR22 - THREAD_FPR0
fld.d $f23, \tmp, THREAD_FPR23 - THREAD_FPR0
fld.d $f24, \tmp, THREAD_FPR24 - THREAD_FPR0
fld.d $f25, \tmp, THREAD_FPR25 - THREAD_FPR0
fld.d $f26, \tmp, THREAD_FPR26 - THREAD_FPR0
fld.d $f27, \tmp, THREAD_FPR27 - THREAD_FPR0
fld.d $f28, \tmp, THREAD_FPR28 - THREAD_FPR0
fld.d $f29, \tmp, THREAD_FPR29 - THREAD_FPR0
fld.d $f30, \tmp, THREAD_FPR30 - THREAD_FPR0
fld.d $f31, \tmp, THREAD_FPR31 - THREAD_FPR0
.endm
.macro not dst src
nor \dst, \src, zero
.endm
.macro bgt r0 r1 label
blt \r1, \r0, \label
.endm
.macro bltz r0 label
blt \r0, zero, \label
.endm
.macro bgez r0 label
bge \r0, zero, \label
.endm
#endif /* _ASM_ASMMACRO_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Atomic operations.
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_ATOMIC_H
#define _ASM_ATOMIC_H
#include <linux/types.h>
#include <asm/barrier.h>
#include <asm/cmpxchg.h>
#include <asm/compiler.h>
#if __SIZEOF_LONG__ == 4
#define __LL "ll.w "
#define __SC "sc.w "
#define __AMADD "amadd.w "
#define __AMAND_DB "amand_db.w "
#define __AMOR_DB "amor_db.w "
#define __AMXOR_DB "amxor_db.w "
#elif __SIZEOF_LONG__ == 8
#define __LL "ll.d "
#define __SC "sc.d "
#define __AMADD "amadd.d "
#define __AMAND_DB "amand_db.d "
#define __AMOR_DB "amor_db.d "
#define __AMXOR_DB "amxor_db.d "
#endif
#define ATOMIC_INIT(i) { (i) }
/*
* arch_atomic_read - read atomic variable
* @v: pointer of type atomic_t
*
* Atomically reads the value of @v.
*/
#define arch_atomic_read(v) READ_ONCE((v)->counter)
/*
* arch_atomic_set - set atomic variable
* @v: pointer of type atomic_t
* @i: required value
*
* Atomically sets the value of @v to @i.
*/
#define arch_atomic_set(v, i) WRITE_ONCE((v)->counter, (i))
#define ATOMIC_OP(op, I, asm_op) \
static inline void arch_atomic_##op(int i, atomic_t *v) \
{ \
__asm__ __volatile__( \
"am"#asm_op"_db.w" " $zero, %1, %0 \n" \
: "+ZB" (v->counter) \
: "r" (I) \
: "memory"); \
}
#define ATOMIC_OP_RETURN(op, I, asm_op, c_op) \
static inline int arch_atomic_##op##_return_relaxed(int i, atomic_t *v) \
{ \
int result; \
\
__asm__ __volatile__( \
"am"#asm_op"_db.w" " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
\
return result c_op I; \
}
#define ATOMIC_FETCH_OP(op, I, asm_op) \
static inline int arch_atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
{ \
int result; \
\
__asm__ __volatile__( \
"am"#asm_op"_db.w" " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
\
return result; \
}
#define ATOMIC_OPS(op, I, asm_op, c_op) \
ATOMIC_OP(op, I, asm_op) \
ATOMIC_OP_RETURN(op, I, asm_op, c_op) \
ATOMIC_FETCH_OP(op, I, asm_op)
ATOMIC_OPS(add, i, add, +)
ATOMIC_OPS(sub, -i, add, +)
#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed
#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, I, asm_op) \
ATOMIC_OP(op, I, asm_op) \
ATOMIC_FETCH_OP(op, I, asm_op)
ATOMIC_OPS(and, i, and)
ATOMIC_OPS(or, i, or)
ATOMIC_OPS(xor, i, xor)
#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
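/*
 * For illustration, ATOMIC_OP(add, i, add) above expands (modulo
 * whitespace) to the following function, mapping arch_atomic_add()
 * straight onto the amadd_db.w instruction:
 *
 *	static inline void arch_atomic_add(int i, atomic_t *v)
 *	{
 *		__asm__ __volatile__(
 *		"amadd_db.w $zero, %1, %0	\n"
 *		: "+ZB" (v->counter)
 *		: "r" (i)
 *		: "memory");
 *	}
 */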
static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
int prev, rc;
__asm__ __volatile__ (
"0: ll.w %[p], %[c]\n"
" beq %[p], %[u], 1f\n"
" add.w %[rc], %[p], %[a]\n"
" sc.w %[rc], %[c]\n"
" beqz %[rc], 0b\n"
" b 2f\n"
"1:\n"
__WEAK_LLSC_MB
"2:\n"
: [p]"=&r" (prev), [rc]"=&r" (rc),
[c]"=ZB" (v->counter)
: [a]"r" (a), [u]"r" (u)
: "memory");
return prev;
}
#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
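/*
 * Illustrative sketch (hypothetical helper, not part of the original
 * header): this primitive is what the generic atomic layer builds
 * atomic_inc_not_zero() and friends from, e.g. taking a reference only
 * while the count is still non-zero.
 */
static inline bool example_get_ref(atomic_t *refcount)
{
	return arch_atomic_fetch_add_unless(refcount, 1, 0) != 0;
}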
/*
* arch_atomic_sub_if_positive - conditionally subtract integer from atomic variable
* @i: integer value to subtract
* @v: pointer of type atomic_t
*
* Atomically test @v and subtract @i if @v is greater than or equal to @i.
* The function returns the old value of @v minus @i.
*/
static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
{
int result;
int temp;
if (__builtin_constant_p(i)) {
__asm__ __volatile__(
"1: ll.w %1, %2 # atomic_sub_if_positive\n"
" addi.w %0, %1, %3 \n"
" or %1, %0, $zero \n"
" blt %0, $zero, 2f \n"
" sc.w %1, %2 \n"
" beq $zero, %1, 1b \n"
"2: \n"
__WEAK_LLSC_MB
: "=&r" (result), "=&r" (temp),
"+" GCC_OFF_SMALL_ASM() (v->counter)
: "I" (-i));
} else {
__asm__ __volatile__(
"1: ll.w %1, %2 # atomic_sub_if_positive\n"
" sub.w %0, %1, %3 \n"
" or %1, %0, $zero \n"
" blt %0, $zero, 2f \n"
" sc.w %1, %2 \n"
" beq $zero, %1, 1b \n"
"2: \n"
__WEAK_LLSC_MB
: "=&r" (result), "=&r" (temp),
"+" GCC_OFF_SMALL_ASM() (v->counter)
: "r" (i));
}
return result;
}
#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
/*
* arch_atomic_dec_if_positive - decrement by 1 if old value positive
* @v: pointer of type atomic_t
*/
#define arch_atomic_dec_if_positive(v) arch_atomic_sub_if_positive(1, v)
#ifdef CONFIG_64BIT
#define ATOMIC64_INIT(i) { (i) }
/*
* arch_atomic64_read - read atomic variable
* @v: pointer of type atomic64_t
*
* Atomically reads the value of @v.
*/
#define arch_atomic64_read(v) READ_ONCE((v)->counter)
/*
* arch_atomic64_set - set atomic variable
* @v: pointer of type atomic64_t
* @i: required value
*/
#define arch_atomic64_set(v, i) WRITE_ONCE((v)->counter, (i))
#define ATOMIC64_OP(op, I, asm_op) \
static inline void arch_atomic64_##op(long i, atomic64_t *v) \
{ \
__asm__ __volatile__( \
"am"#asm_op"_db.d " " $zero, %1, %0 \n" \
: "+ZB" (v->counter) \
: "r" (I) \
: "memory"); \
}
#define ATOMIC64_OP_RETURN(op, I, asm_op, c_op) \
static inline long arch_atomic64_##op##_return_relaxed(long i, atomic64_t *v) \
{ \
long result; \
__asm__ __volatile__( \
"am"#asm_op"_db.d " " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
\
return result c_op I; \
}
#define ATOMIC64_FETCH_OP(op, I, asm_op) \
static inline long arch_atomic64_fetch_##op##_relaxed(long i, atomic64_t *v) \
{ \
long result; \
\
__asm__ __volatile__( \
"am"#asm_op"_db.d " " %1, %2, %0 \n" \
: "+ZB" (v->counter), "=&r" (result) \
: "r" (I) \
: "memory"); \
\
return result; \
}
#define ATOMIC64_OPS(op, I, asm_op, c_op) \
ATOMIC64_OP(op, I, asm_op) \
ATOMIC64_OP_RETURN(op, I, asm_op, c_op) \
ATOMIC64_FETCH_OP(op, I, asm_op)
ATOMIC64_OPS(add, i, add, +)
ATOMIC64_OPS(sub, -i, add, +)
#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
#undef ATOMIC64_OPS
#define ATOMIC64_OPS(op, I, asm_op) \
ATOMIC64_OP(op, I, asm_op) \
ATOMIC64_FETCH_OP(op, I, asm_op)
ATOMIC64_OPS(and, i, and)
ATOMIC64_OPS(or, i, or)
ATOMIC64_OPS(xor, i, xor)
#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
#undef ATOMIC64_OPS
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP
static inline long arch_atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
{
long prev, rc;
__asm__ __volatile__ (
"0: ll.d %[p], %[c]\n"
" beq %[p], %[u], 1f\n"
" add.d %[rc], %[p], %[a]\n"
" sc.d %[rc], %[c]\n"
" beqz %[rc], 0b\n"
" b 2f\n"
"1:\n"
__WEAK_LLSC_MB
"2:\n"
: [p]"=&r" (prev), [rc]"=&r" (rc),
[c] "=ZB" (v->counter)
: [a]"r" (a), [u]"r" (u)
: "memory");
return prev;
}
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
/*
* arch_atomic64_sub_if_positive - conditionally subtract integer from atomic variable
* @i: integer value to subtract
* @v: pointer of type atomic64_t
*
* Atomically test @v and subtract @i if @v is greater than or equal to @i.
* The function returns the old value of @v minus @i.
*/
static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
{
long result;
long temp;
if (__builtin_constant_p(i)) {
__asm__ __volatile__(
"1: ll.d %1, %2 # atomic64_sub_if_positive \n"
" addi.d %0, %1, %3 \n"
" or %1, %0, $zero \n"
" blt %0, $zero, 2f \n"
" sc.d %1, %2 \n"
" beq %1, $zero, 1b \n"
"2: \n"
__WEAK_LLSC_MB
: "=&r" (result), "=&r" (temp),
"+" GCC_OFF_SMALL_ASM() (v->counter)
: "I" (-i));
} else {
__asm__ __volatile__(
"1: ll.d %1, %2 # atomic64_sub_if_positive \n"
" sub.d %0, %1, %3 \n"
" or %1, %0, $zero \n"
" blt %0, $zero, 2f \n"
" sc.d %1, %2 \n"
" beq %1, $zero, 1b \n"
"2: \n"
__WEAK_LLSC_MB
: "=&r" (result), "=&r" (temp),
"+" GCC_OFF_SMALL_ASM() (v->counter)
: "r" (i));
}
return result;
}
#define arch_atomic64_cmpxchg(v, o, n) \
((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
/*
* arch_atomic64_dec_if_positive - decrement by 1 if old value positive
* @v: pointer of type atomic64_t
*/
#define arch_atomic64_dec_if_positive(v) arch_atomic64_sub_if_positive(1, v)
#endif /* CONFIG_64BIT */
#endif /* _ASM_ATOMIC_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef __ASM_BARRIER_H
#define __ASM_BARRIER_H
#define __sync() __asm__ __volatile__("dbar 0" : : : "memory")
#define fast_wmb() __sync()
#define fast_rmb() __sync()
#define fast_mb() __sync()
#define fast_iob() __sync()
#define wbflush() __sync()
#define wmb() fast_wmb()
#define rmb() fast_rmb()
#define mb() fast_mb()
#define iob() fast_iob()
#define __smp_mb() __asm__ __volatile__("dbar 0" : : : "memory")
#define __smp_rmb() __asm__ __volatile__("dbar 0" : : : "memory")
#define __smp_wmb() __asm__ __volatile__("dbar 0" : : : "memory")
#ifdef CONFIG_SMP
#define __WEAK_LLSC_MB " dbar 0 \n"
#else
#define __WEAK_LLSC_MB " \n"
#endif
#define __smp_mb__before_atomic() barrier()
#define __smp_mb__after_atomic() barrier()
/**
* array_index_mask_nospec() - generate a ~0 mask when index < size, 0 otherwise
* @index: array element index
* @size: number of elements in array
*
* Returns:
* 0 - (@index < @size)
*/
#define array_index_mask_nospec array_index_mask_nospec
static inline unsigned long array_index_mask_nospec(unsigned long index,
unsigned long size)
{
unsigned long mask;
__asm__ __volatile__(
"sltu %0, %1, %2\n\t"
#if (__SIZEOF_LONG__ == 4)
"sub.w %0, $r0, %0\n\t"
#elif (__SIZEOF_LONG__ == 8)
"sub.d %0, $r0, %0\n\t"
#endif
: "=r" (mask)
: "r" (index), "r" (size)
:);
return mask;
}
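/*
 * Illustrative sketch (hypothetical helper, not part of the original
 * header): the mask is meant to be AND-ed into an untrusted index after
 * the bounds check, as the generic array_index_nospec() wrapper does, so
 * the index stays in bounds even under mis-speculation.
 */
static inline unsigned long example_nospec_load(unsigned long *array,
						unsigned long index,
						unsigned long size)
{
	if (index >= size)
		return 0;
	index &= array_index_mask_nospec(index, size);	/* ~0 iff in bounds */
	return array[index];
}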
#define __smp_load_acquire(p) \
({ \
union { typeof(*p) __val; char __c[1]; } __u; \
unsigned long __tmp = 0; \
compiletime_assert_atomic_type(*p); \
switch (sizeof(*p)) { \
case 1: \
*(__u8 *)__u.__c = *(volatile __u8 *)p; \
__smp_mb(); \
break; \
case 2: \
*(__u16 *)__u.__c = *(volatile __u16 *)p; \
__smp_mb(); \
break; \
case 4: \
__asm__ __volatile__( \
"amor_db.w %[val], %[tmp], %[mem] \n" \
: [val] "=&r" (*(__u32 *)__u.__c) \
: [mem] "ZB" (*(u32 *) p), [tmp] "r" (__tmp) \
: "memory"); \
break; \
case 8: \
__asm__ __volatile__( \
"amor_db.d %[val], %[tmp], %[mem] \n" \
: [val] "=&r" (*(__u64 *)__u.__c) \
: [mem] "ZB" (*(u64 *) p), [tmp] "r" (__tmp) \
: "memory"); \
break; \
} \
(typeof(*p))__u.__val; \
})
#define __smp_store_release(p, v) \
do { \
union { typeof(*p) __val; char __c[1]; } __u = \
{ .__val = (__force typeof(*p)) (v) }; \
unsigned long __tmp; \
compiletime_assert_atomic_type(*p); \
switch (sizeof(*p)) { \
case 1: \
__smp_mb(); \
*(volatile __u8 *)p = *(__u8 *)__u.__c; \
break; \
case 2: \
__smp_mb(); \
*(volatile __u16 *)p = *(__u16 *)__u.__c; \
break; \
case 4: \
__asm__ __volatile__( \
"amswap_db.w %[tmp], %[val], %[mem] \n" \
: [mem] "+ZB" (*(u32 *)p), [tmp] "=&r" (__tmp) \
: [val] "r" (*(__u32 *)__u.__c) \
: ); \
break; \
case 8: \
__asm__ __volatile__( \
"amswap_db.d %[tmp], %[val], %[mem] \n" \
: [mem] "+ZB" (*(u64 *)p), [tmp] "=&r" (__tmp) \
: [val] "r" (*(__u64 *)__u.__c) \
: ); \
break; \
} \
} while (0)
#define __smp_store_mb(p, v) \
do { \
union { typeof(p) __val; char __c[1]; } __u = \
{ .__val = (__force typeof(p)) (v) }; \
unsigned long __tmp; \
switch (sizeof(p)) { \
case 1: \
*(volatile __u8 *)&p = *(__u8 *)__u.__c; \
__smp_mb(); \
break; \
case 2: \
*(volatile __u16 *)&p = *(__u16 *)__u.__c; \
__smp_mb(); \
break; \
case 4: \
__asm__ __volatile__( \
"amswap_db.w %[tmp], %[val], %[mem] \n" \
: [mem] "+ZB" (*(u32 *)&p), [tmp] "=&r" (__tmp) \
: [val] "r" (*(__u32 *)__u.__c) \
: ); \
break; \
case 8: \
__asm__ __volatile__( \
"amswap_db.d %[tmp], %[val], %[mem] \n" \
: [mem] "+ZB" (*(u64 *)&p), [tmp] "=&r" (__tmp) \
: [val] "r" (*(__u64 *)__u.__c) \
: ); \
break; \
} \
} while (0)
#include <asm-generic/barrier.h>
#endif /* __ASM_BARRIER_H */
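/*
 * Illustrative sketch (not part of the original header): the generic
 * smp_store_release()/smp_load_acquire() wrappers that
 * asm-generic/barrier.h derives from the macros above give the usual
 * message-passing guarantee.  On LoongArch the 4-byte cases map to
 * amswap_db.w and amor_db.w respectively.
 */
static int example_data;
static int example_ready;

static void example_producer(void)
{
	example_data = 42;
	smp_store_release(&example_ready, 1);	/* release: data visible first */
}

static int example_consumer(void)
{
	if (smp_load_acquire(&example_ready))	/* acquire: pairs with release */
		return example_data;		/* guaranteed to see 42 */
	return -1;
}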
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_BITOPS_H
#define _ASM_BITOPS_H
#include <linux/compiler.h>
#ifndef _LINUX_BITOPS_H
#error only <linux/bitops.h> can be included directly
#endif
#include <asm/barrier.h>
#include <asm-generic/bitops/builtin-ffs.h>
#include <asm-generic/bitops/builtin-fls.h>
#include <asm-generic/bitops/builtin-__ffs.h>
#include <asm-generic/bitops/builtin-__fls.h>
#include <asm-generic/bitops/ffz.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/sched.h>
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/atomic.h>
#include <asm-generic/bitops/non-atomic.h>
#include <asm-generic/bitops/lock.h>
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>
#endif /* _ASM_BITOPS_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef __LOONGARCH_ASM_BITREV_H__
#define __LOONGARCH_ASM_BITREV_H__
#include <linux/swab.h>
static __always_inline __attribute_const__ u32 __arch_bitrev32(u32 x)
{
u32 ret;
asm("bitrev.4b %0, %1" : "=r"(ret) : "r"(__swab32(x)));
return ret;
}
static __always_inline __attribute_const__ u16 __arch_bitrev16(u16 x)
{
u16 ret;
asm("bitrev.4b %0, %1" : "=r"(ret) : "r"(__swab16(x)));
return ret;
}
static __always_inline __attribute_const__ u8 __arch_bitrev8(u8 x)
{
u8 ret;
asm("bitrev.4b %0, %1" : "=r"(ret) : "r"(x));
return ret;
}
#endif /* __LOONGARCH_ASM_BITREV_H__ */
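/*
 * Illustrative host-side model (not part of the original header):
 * bitrev.4b reverses the bits inside each byte, so composing it with a
 * byte swap yields a full-width bit reversal, which is exactly what
 * __arch_bitrev32() does with __swab32() + bitrev.4b.
 */
static inline u32 example_bitrev32_model(u32 x)
{
	u32 swapped = __builtin_bswap32(x);	/* models __swab32() */
	u32 out = 0;
	int i, b;

	for (i = 0; i < 4; i++) {		/* models bitrev.4b per byte */
		u32 byte = (swapped >> (8 * i)) & 0xff;
		u32 rev = 0;

		for (b = 0; b < 8; b++)
			rev |= ((byte >> b) & 1) << (7 - b);
		out |= rev << (8 * i);
	}
	return out;
}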
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_BOOTINFO_H
#define _ASM_BOOTINFO_H
#include <linux/types.h>
#include <asm/setup.h>
const char *get_system_type(void);
extern void init_environ(void);
extern void memblock_init(void);
extern void platform_init(void);
extern void plat_swiotlb_setup(void);
extern int __init init_numa_memory(void);
struct loongson_board_info {
int bios_size;
const char *bios_vendor;
const char *bios_version;
const char *bios_release_date;
const char *board_name;
const char *board_vendor;
};
struct loongson_system_configuration {
int nr_cpus;
int nr_nodes;
int nr_io_pics;
int boot_cpu_id;
int cores_per_node;
int cores_per_package;
const char *cpuname;
};
extern u64 efi_system_table;
extern unsigned long fw_arg0, fw_arg1;
extern struct loongson_board_info b_info;
extern struct loongson_system_configuration loongson_sysconf;
#endif /* _ASM_BOOTINFO_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_BRANCH_H
#define _ASM_BRANCH_H
#include <asm/ptrace.h>
static inline unsigned long exception_era(struct pt_regs *regs)
{
return regs->csr_era;
}
static inline int compute_return_era(struct pt_regs *regs)
{
regs->csr_era += 4;
return 0;
}
#endif /* _ASM_BRANCH_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_BUG_H
#define __ASM_BUG_H
#include <linux/compiler.h>
#ifdef CONFIG_BUG
#include <asm/break.h>
static inline void __noreturn BUG(void)
{
__asm__ __volatile__("break %0" : : "i" (BRK_BUG));
unreachable();
}
#define HAVE_ARCH_BUG
#endif
#include <asm-generic/bug.h>
#endif /* __ASM_BUG_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_CACHE_H
#define _ASM_CACHE_H
#define L1_CACHE_SHIFT CONFIG_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#define __read_mostly __section(".data..read_mostly")
#endif /* _ASM_CACHE_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_CACHEFLUSH_H
#define _ASM_CACHEFLUSH_H
#include <linux/mm.h>
#include <asm/cpu-features.h>
#include <asm/cacheops.h>
extern void local_flush_icache_range(unsigned long start, unsigned long end);
#define flush_icache_range local_flush_icache_range
#define flush_icache_user_range local_flush_icache_range
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define flush_cache_vmap(start, end) do { } while (0)
#define flush_cache_vunmap(start, end) do { } while (0)
#define flush_icache_page(vma, page) do { } while (0)
#define flush_icache_user_page(vma, page, addr, len) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
#define cache_op(op, addr) \
__asm__ __volatile__( \
" cacop %0, %1 \n" \
: \
: "i" (op), "ZC" (*(unsigned char *)(addr)))
static inline void flush_icache_line_indexed(unsigned long addr)
{
cache_op(Index_Invalidate_I, addr);
}
static inline void flush_dcache_line_indexed(unsigned long addr)
{
cache_op(Index_Writeback_Inv_D, addr);
}
static inline void flush_vcache_line_indexed(unsigned long addr)
{
cache_op(Index_Writeback_Inv_V, addr);
}
static inline void flush_scache_line_indexed(unsigned long addr)
{
cache_op(Index_Writeback_Inv_S, addr);
}
static inline void flush_icache_line(unsigned long addr)
{
cache_op(Hit_Invalidate_I, addr);
}
static inline void flush_dcache_line(unsigned long addr)
{
cache_op(Hit_Writeback_Inv_D, addr);
}
static inline void flush_vcache_line(unsigned long addr)
{
cache_op(Hit_Writeback_Inv_V, addr);
}
static inline void flush_scache_line(unsigned long addr)
{
cache_op(Hit_Writeback_Inv_S, addr);
}
#include <asm-generic/cacheflush.h>
#endif /* _ASM_CACHEFLUSH_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cache operations for the cache instruction.
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef __ASM_CACHEOPS_H
#define __ASM_CACHEOPS_H
/*
* Most cache ops are split into a 2 bit field identifying the cache, and a 3
* bit field identifying the cache operation.
*/
#define CacheOp_Cache 0x03
#define CacheOp_Op 0x1c
#define Cache_I 0x00
#define Cache_D 0x01
#define Cache_V 0x02
#define Cache_S 0x03
#define Index_Invalidate 0x08
#define Index_Writeback_Inv 0x08
#define Hit_Invalidate 0x10
#define Hit_Writeback_Inv 0x10
#define CacheOp_User_Defined 0x18
#define Index_Invalidate_I (Cache_I | Index_Invalidate)
#define Index_Writeback_Inv_D (Cache_D | Index_Writeback_Inv)
#define Index_Writeback_Inv_V (Cache_V | Index_Writeback_Inv)
#define Index_Writeback_Inv_S (Cache_S | Index_Writeback_Inv)
#define Hit_Invalidate_I (Cache_I | Hit_Invalidate)
#define Hit_Writeback_Inv_D (Cache_D | Hit_Writeback_Inv)
#define Hit_Writeback_Inv_V (Cache_V | Hit_Writeback_Inv)
#define Hit_Writeback_Inv_S (Cache_S | Hit_Writeback_Inv)
#endif /* __ASM_CACHEOPS_H */
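/*
 * Illustrative worked example (not part of the original header): every op
 * above is just (cache selector | operation), e.g.
 *
 *	Index_Writeback_Inv_D = Cache_D | Index_Writeback_Inv = 0x01 | 0x08 = 0x09
 *	Hit_Invalidate_I      = Cache_I | Hit_Invalidate      = 0x00 | 0x10 = 0x10
 *
 * and the two masks pull the fields back apart:
 *
 *	(Index_Writeback_Inv_D & CacheOp_Cache) == Cache_D
 *	(Index_Writeback_Inv_D & CacheOp_Op)    == Index_Writeback_Inv
 */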
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Author: Huacai Chen <chenhuacai@loongson.cn>
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef __ASM_CLOCKSOURCE_H
#define __ASM_CLOCKSOURCE_H
#include <asm/vdso/clocksource.h>
#endif /* __ASM_CLOCKSOURCE_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef __ASM_CMPXCHG_H
#define __ASM_CMPXCHG_H
#include <asm/barrier.h>
#include <linux/build_bug.h>
#define __xchg_asm(amswap_db, m, val) \
({ \
__typeof(val) __ret; \
\
__asm__ __volatile__ ( \
" "amswap_db" %1, %z2, %0 \n" \
: "+ZB" (*m), "=&r" (__ret) \
: "Jr" (val) \
: "memory"); \
\
__ret; \
})
static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
int size)
{
switch (size) {
case 4:
return __xchg_asm("amswap_db.w", (volatile u32 *)ptr, (u32)x);
case 8:
return __xchg_asm("amswap_db.d", (volatile u64 *)ptr, (u64)x);
default:
BUILD_BUG();
}
return 0;
}
#define arch_xchg(ptr, x) \
({ \
__typeof__(*(ptr)) __res; \
\
__res = (__typeof__(*(ptr))) \
__xchg((ptr), (unsigned long)(x), sizeof(*(ptr))); \
\
__res; \
})
#define __cmpxchg_asm(ld, st, m, old, new) \
({ \
__typeof(old) __ret; \
\
__asm__ __volatile__( \
"1: " ld " %0, %2 # __cmpxchg_asm \n" \
" bne %0, %z3, 2f \n" \
" or $t0, %z4, $zero \n" \
" " st " $t0, %1 \n" \
" beq $zero, $t0, 1b \n" \
"2: \n" \
__WEAK_LLSC_MB \
: "=&r" (__ret), "=ZB"(*m) \
: "ZB"(*m), "Jr" (old), "Jr" (new) \
: "t0", "memory"); \
\
__ret; \
})
static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
unsigned long new, unsigned int size)
{
switch (size) {
case 4:
return __cmpxchg_asm("ll.w", "sc.w", (volatile u32 *)ptr,
(u32)old, new);
case 8:
return __cmpxchg_asm("ll.d", "sc.d", (volatile u64 *)ptr,
(u64)old, new);
default:
BUILD_BUG();
}
return 0;
}
#define arch_cmpxchg_local(ptr, old, new) \
((__typeof__(*(ptr))) \
__cmpxchg((ptr), \
(unsigned long)(__typeof__(*(ptr)))(old), \
(unsigned long)(__typeof__(*(ptr)))(new), \
sizeof(*(ptr))))
#define arch_cmpxchg(ptr, old, new) \
({ \
__typeof__(*(ptr)) __res; \
\
__res = arch_cmpxchg_local((ptr), (old), (new)); \
\
__res; \
})
#ifdef CONFIG_64BIT
#define arch_cmpxchg64_local(ptr, o, n) \
({ \
BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
arch_cmpxchg_local((ptr), (o), (n)); \
})
#define arch_cmpxchg64(ptr, o, n) \
({ \
BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
arch_cmpxchg((ptr), (o), (n)); \
})
#else
#include <asm-generic/cmpxchg-local.h>
#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
#define arch_cmpxchg64(ptr, o, n) arch_cmpxchg64_local((ptr), (o), (n))
#endif
#endif /* __ASM_CMPXCHG_H */
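/*
 * Illustrative sketch (hypothetical helper, not part of the original
 * header): the classic compare-and-swap retry loop built on top of the
 * ll.w/sc.w-backed arch_cmpxchg() above.
 */
static inline void example_atomic_max(int *p, int new)
{
	int old = *p;	/* a real implementation would use READ_ONCE() */

	while (old < new) {
		int seen = arch_cmpxchg(p, old, new);

		if (seen == old)	/* swap succeeded */
			break;
		old = seen;		/* lost a race; retry with fresh value */
	}
}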
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_COMPILER_H
#define _ASM_COMPILER_H
#define GCC_OFF_SMALL_ASM() "ZC"
#define LOONGARCH_ISA_LEVEL "loongarch"
#define LOONGARCH_ISA_ARCH_LEVEL "arch=loongarch"
#define LOONGARCH_ISA_LEVEL_RAW loongarch
#define LOONGARCH_ISA_ARCH_LEVEL_RAW LOONGARCH_ISA_LEVEL_RAW
#endif /* _ASM_COMPILER_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*
* Derived from MIPS:
* Copyright (C) 2003, 2004 Ralf Baechle
* Copyright (C) 2004 Maciej W. Rozycki
*/
#ifndef __ASM_CPU_FEATURES_H
#define __ASM_CPU_FEATURES_H
#include <asm/cpu.h>
#include <asm/cpu-info.h>
#define cpu_opt(opt) (cpu_data[0].options & (opt))
#define cpu_has(feat) (cpu_data[0].options & BIT_ULL(feat))
#define cpu_has_loongarch (cpu_has_loongarch32 | cpu_has_loongarch64)
#define cpu_has_loongarch32 (cpu_data[0].isa_level & LOONGARCH_CPU_ISA_32BIT)
#define cpu_has_loongarch64 (cpu_data[0].isa_level & LOONGARCH_CPU_ISA_64BIT)
#define cpu_icache_line_size() cpu_data[0].icache.linesz
#define cpu_dcache_line_size() cpu_data[0].dcache.linesz
#define cpu_vcache_line_size() cpu_data[0].vcache.linesz
#define cpu_scache_line_size() cpu_data[0].scache.linesz
#ifdef CONFIG_32BIT
# define cpu_has_64bits (cpu_data[0].isa_level & LOONGARCH_CPU_ISA_64BIT)
# define cpu_vabits 31
# define cpu_pabits 31
#endif
#ifdef CONFIG_64BIT
# define cpu_has_64bits 1
# define cpu_vabits cpu_data[0].vabits
# define cpu_pabits cpu_data[0].pabits
# define __NEED_ADDRBITS_PROBE
#endif
/*
* SMP assumption: Options of CPU 0 are a superset of all processors.
* This is true for all known LoongArch systems.
*/
#define cpu_has_cpucfg cpu_opt(LOONGARCH_CPU_CPUCFG)
#define cpu_has_lam cpu_opt(LOONGARCH_CPU_LAM)
#define cpu_has_ual cpu_opt(LOONGARCH_CPU_UAL)
#define cpu_has_fpu cpu_opt(LOONGARCH_CPU_FPU)
#define cpu_has_lsx cpu_opt(LOONGARCH_CPU_LSX)
#define cpu_has_lasx cpu_opt(LOONGARCH_CPU_LASX)
#define cpu_has_complex cpu_opt(LOONGARCH_CPU_COMPLEX)
#define cpu_has_crypto cpu_opt(LOONGARCH_CPU_CRYPTO)
#define cpu_has_lvz cpu_opt(LOONGARCH_CPU_LVZ)
#define cpu_has_lbt_x86 cpu_opt(LOONGARCH_CPU_LBT_X86)
#define cpu_has_lbt_arm cpu_opt(LOONGARCH_CPU_LBT_ARM)
#define cpu_has_lbt_mips cpu_opt(LOONGARCH_CPU_LBT_MIPS)
#define cpu_has_lbt (cpu_has_lbt_x86|cpu_has_lbt_arm|cpu_has_lbt_mips)
#define cpu_has_csr cpu_opt(LOONGARCH_CPU_CSR)
#define cpu_has_tlb cpu_opt(LOONGARCH_CPU_TLB)
#define cpu_has_watch cpu_opt(LOONGARCH_CPU_WATCH)
#define cpu_has_vint cpu_opt(LOONGARCH_CPU_VINT)
#define cpu_has_csripi cpu_opt(LOONGARCH_CPU_CSRIPI)
#define cpu_has_extioi cpu_opt(LOONGARCH_CPU_EXTIOI)
#define cpu_has_prefetch cpu_opt(LOONGARCH_CPU_PREFETCH)
#define cpu_has_pmp cpu_opt(LOONGARCH_CPU_PMP)
#define cpu_has_perf cpu_opt(LOONGARCH_CPU_PMP)
#define cpu_has_scalefreq cpu_opt(LOONGARCH_CPU_SCALEFREQ)
#define cpu_has_flatmode cpu_opt(LOONGARCH_CPU_FLATMODE)
#define cpu_has_eiodecode cpu_opt(LOONGARCH_CPU_EIODECODE)
#define cpu_has_guestid cpu_opt(LOONGARCH_CPU_GUESTID)
#define cpu_has_hypervisor cpu_opt(LOONGARCH_CPU_HYPERVISOR)
#endif /* __ASM_CPU_FEATURES_H */
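A hypothetical usage sketch (the function name is invented for illustration): the cpu_has_* macros test the boot CPU's option bits, so feature-dependent code paths can be chosen at run time under the CPU-0-superset assumption documented above.

/*
 * Hypothetical example of run-time feature selection;
 * pr_info() and -ENODEV are standard kernel facilities.
 */
static int __init init_simd_support(void)
{
	if (!cpu_has_fpu)
		return -ENODEV;		/* no hardware FPU at all */

	if (cpu_has_lasx)
		pr_info("256-bit LASX SIMD available\n");
	else if (cpu_has_lsx)
		pr_info("128-bit LSX SIMD available\n");

	return 0;
}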
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef __ASM_CPU_INFO_H
#define __ASM_CPU_INFO_H
#include <linux/cache.h>
#include <linux/types.h>
#include <asm/loongarch.h>
/*
* Descriptor for a cache
*/
struct cache_desc {
unsigned int waysize; /* Bytes per way */
unsigned short sets; /* Number of sets */
unsigned char ways; /* Number of ways */
unsigned char linesz; /* Size of line in bytes */
unsigned char waybit; /* Bits to select in a cache set */
unsigned char flags; /* Flags describing cache properties */
};
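The fields above relate as follows: each way contributes one line of linesz bytes per set, so waysize = sets * linesz and the total capacity is waysize * ways bytes. A hypothetical helper (not in the header) making that explicit:

/*
 * Hypothetical illustration: total cache capacity in bytes, derived
 * from the field relationships described above.
 */
static inline unsigned int cache_total_bytes(const struct cache_desc *d)
{
	return d->waysize * d->ways;	/* == sets * linesz * ways */
}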
struct cpuinfo_loongarch {
u64 asid_cache;
unsigned long asid_mask;
/*
* Capability and feature descriptor structure for LoongArch CPU
*/
unsigned long long options;
unsigned int processor_id;
unsigned int fpu_vers;
unsigned int fpu_csr0;
unsigned int fpu_mask;
unsigned int cputype;
int isa_level;
int tlbsize;
int tlbsizemtlb;
int tlbsizestlbsets;
int tlbsizestlbways;
struct cache_desc icache; /* Primary I-cache */
struct cache_desc dcache; /* Primary D or combined I/D cache */
struct cache_desc vcache; /* Victim cache, between pcache and scache */
struct cache_desc scache; /* Secondary cache */
struct cache_desc tcache; /* Tertiary/split secondary cache */
int core; /* physical core number in package */
int package; /* physical package number */
int vabits; /* Virtual Address size in bits */
int pabits; /* Physical Address size in bits */
unsigned int ksave_mask; /* Usable KSave mask. */
unsigned int watch_dreg_count; /* Number of data breakpoints */
unsigned int watch_ireg_count; /* Number of instruction breakpoints */
unsigned int watch_reg_use_cnt; /* min(NUM_WATCH_REGS, watch_dreg_count + watch_ireg_count), Usable by ptrace */
} __aligned(SMP_CACHE_BYTES);
extern struct cpuinfo_loongarch cpu_data[];
#define boot_cpu_data cpu_data[0]
#define current_cpu_data cpu_data[smp_processor_id()]
#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
extern void cpu_probe(void);
extern const char *__cpu_family[];
extern const char *__cpu_full_name[];
#define cpu_family_string() __cpu_family[raw_smp_processor_id()]
#define cpu_full_name_string() __cpu_full_name[raw_smp_processor_id()]
struct seq_file;
struct notifier_block;
extern int register_proc_cpuinfo_notifier(struct notifier_block *nb);
extern int proc_cpuinfo_notifier_call_chain(unsigned long val, void *v);
#define proc_cpuinfo_notifier(fn, pri) \
({ \
static struct notifier_block fn##_nb = { \
.notifier_call = fn, \
.priority = pri \
}; \
\
register_proc_cpuinfo_notifier(&fn##_nb); \
})
struct proc_cpuinfo_notifier_args {
struct seq_file *m;
unsigned long n;
};
static inline bool cpus_are_siblings(int cpua, int cpub)
{
struct cpuinfo_loongarch *infoa = &cpu_data[cpua];
struct cpuinfo_loongarch *infob = &cpu_data[cpub];
if (infoa->package != infob->package)
return false;
if (infoa->core != infob->core)
return false;
return true;
}
static inline unsigned long cpu_asid_mask(struct cpuinfo_loongarch *cpuinfo)
{
return cpuinfo->asid_mask;
}
static inline void set_cpu_asid_mask(struct cpuinfo_loongarch *cpuinfo,
unsigned long asid_mask)
{
cpuinfo->asid_mask = asid_mask;
}
#endif /* __ASM_CPU_INFO_H */
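A hypothetical sketch of how these accessors are typically consumed, e.g. when reporting topology (the function name is invented; pr_info() is the standard kernel logging helper):

/*
 * Hypothetical example: print a CPU's physical placement and address
 * widths using the cpu_data[] fields and cpus_are_siblings() above.
 */
static void example_print_topology(int cpu)
{
	struct cpuinfo_loongarch *c = &cpu_data[cpu];

	pr_info("CPU%d: package %d, core %d, %d-bit VA, %d-bit PA\n",
		cpu, c->package, c->core, c->vabits, c->pabits);

	if (cpu != 0 && cpus_are_siblings(cpu, 0))
		pr_info("CPU%d shares a core with the boot CPU\n", cpu);
}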
/* SPDX-License-Identifier: GPL-2.0 */
/*
* cpu.h: Values of the PRID register used to match up
* various LoongArch CPU types.
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_CPU_H
#define _ASM_CPU_H
/*
* As described in LoongArch specs from Loongson Technology, the PRID register
* (CPUCFG.00) has the following layout:
*
* +---------------+----------------+------------+--------------------+
* | Reserved | Company ID | Series ID | Product ID |
* +---------------+----------------+------------+--------------------+
* 31 24 23 16 15 12 11 0
*/
/*
* Assigned Company values for bits 23:16 of the PRID register.
*/
#define PRID_COMP_MASK 0xff0000
#define PRID_COMP_LOONGSON 0x140000
/*
* Assigned Series ID values for bits 15:12 of the PRID register. To
* identify a particular CPU type exactly, additional registers may
* eventually need to be examined.
*/
#define PRID_SERIES_MASK 0xf000
#define PRID_SERIES_LA132 0x8000 /* Loongson 32bit */
#define PRID_SERIES_LA264 0xa000 /* Loongson 64bit, 2-issue */
#define PRID_SERIES_LA364 0xb000 /* Loongson 64bit, 3-issue */
#define PRID_SERIES_LA464 0xc000 /* Loongson 64bit, 4-issue */
#define PRID_SERIES_LA664 0xd000 /* Loongson 64bit, 6-issue */
/*
* Particular Product ID values for bits 11:0 of the PRID register.
*/
#define PRID_PRODUCT_MASK 0x0fff
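A hypothetical decode of the layout documented above (read_cpucfg() and LOONGARCH_CPUCFG0 are assumed from asm/loongarch.h elsewhere in this series):

/*
 * Hypothetical illustration: classify the running CPU from the PRID
 * fields using the masks defined above.
 */
static inline bool prid_is_loongson_la464(void)
{
	unsigned int prid = read_cpucfg(LOONGARCH_CPUCFG0);

	return (prid & PRID_COMP_MASK) == PRID_COMP_LOONGSON &&
	       (prid & PRID_SERIES_MASK) == PRID_SERIES_LA464;
}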
#if !defined(__ASSEMBLY__)
enum cpu_type_enum {
CPU_UNKNOWN,
CPU_LOONGSON32,
CPU_LOONGSON64,
CPU_LAST
};
#endif /* !__ASSEMBLY__ */
/*
* ISA Level encodings
*/
#define LOONGARCH_CPU_ISA_LA32R 0x00000001
#define LOONGARCH_CPU_ISA_LA32S 0x00000002
#define LOONGARCH_CPU_ISA_LA64 0x00000004
#define LOONGARCH_CPU_ISA_32BIT (LOONGARCH_CPU_ISA_LA32R | LOONGARCH_CPU_ISA_LA32S)
#define LOONGARCH_CPU_ISA_64BIT LOONGARCH_CPU_ISA_LA64
/*
* CPU Option encodings
*/
#define CPU_FEATURE_CPUCFG 0 /* CPU has CPUCFG */
#define CPU_FEATURE_LAM 1 /* CPU has Atomic instructions */
#define CPU_FEATURE_UAL 2 /* CPU supports unaligned access */
#define CPU_FEATURE_FPU 3 /* CPU has FPU */
#define CPU_FEATURE_LSX 4 /* CPU has LSX (128-bit SIMD) */
#define CPU_FEATURE_LASX 5 /* CPU has LASX (256-bit SIMD) */
#define CPU_FEATURE_COMPLEX 6 /* CPU has Complex instructions */
#define CPU_FEATURE_CRYPTO 7 /* CPU has Crypto instructions */
#define CPU_FEATURE_LVZ 8 /* CPU has Virtualization extension */
#define CPU_FEATURE_LBT_X86 9 /* CPU has X86 Binary Translation */
#define CPU_FEATURE_LBT_ARM 10 /* CPU has ARM Binary Translation */
#define CPU_FEATURE_LBT_MIPS 11 /* CPU has MIPS Binary Translation */
#define CPU_FEATURE_TLB 12 /* CPU has TLB */
#define CPU_FEATURE_CSR 13 /* CPU has CSR */
#define CPU_FEATURE_WATCH 14 /* CPU has watchpoint registers */
#define CPU_FEATURE_VINT 15 /* CPU has vectored interrupts */
#define CPU_FEATURE_CSRIPI 16 /* CPU has CSR-IPI */
#define CPU_FEATURE_EXTIOI 17 /* CPU has EXT-IOI */
#define CPU_FEATURE_PREFETCH 18 /* CPU has prefetch instructions */
#define CPU_FEATURE_PMP 19 /* CPU has performance counter */
#define CPU_FEATURE_SCALEFREQ 20 /* CPU supports cpufreq scaling */
#define CPU_FEATURE_FLATMODE 21 /* CPU has flat mode */
#define CPU_FEATURE_EIODECODE 22 /* CPU has EXTIOI interrupt pin decode mode */
#define CPU_FEATURE_GUESTID 23 /* CPU has GuestID feature */
#define CPU_FEATURE_HYPERVISOR 24 /* CPU has hypervisor (running in VM) */
#define LOONGARCH_CPU_CPUCFG BIT_ULL(CPU_FEATURE_CPUCFG)
#define LOONGARCH_CPU_LAM BIT_ULL(CPU_FEATURE_LAM)
#define LOONGARCH_CPU_UAL BIT_ULL(CPU_FEATURE_UAL)
#define LOONGARCH_CPU_FPU BIT_ULL(CPU_FEATURE_FPU)
#define LOONGARCH_CPU_LSX BIT_ULL(CPU_FEATURE_LSX)
#define LOONGARCH_CPU_LASX BIT_ULL(CPU_FEATURE_LASX)
#define LOONGARCH_CPU_COMPLEX BIT_ULL(CPU_FEATURE_COMPLEX)
#define LOONGARCH_CPU_CRYPTO BIT_ULL(CPU_FEATURE_CRYPTO)
#define LOONGARCH_CPU_LVZ BIT_ULL(CPU_FEATURE_LVZ)
#define LOONGARCH_CPU_LBT_X86 BIT_ULL(CPU_FEATURE_LBT_X86)
#define LOONGARCH_CPU_LBT_ARM BIT_ULL(CPU_FEATURE_LBT_ARM)
#define LOONGARCH_CPU_LBT_MIPS BIT_ULL(CPU_FEATURE_LBT_MIPS)
#define LOONGARCH_CPU_TLB BIT_ULL(CPU_FEATURE_TLB)
#define LOONGARCH_CPU_CSR BIT_ULL(CPU_FEATURE_CSR)
#define LOONGARCH_CPU_WATCH BIT_ULL(CPU_FEATURE_WATCH)
#define LOONGARCH_CPU_VINT BIT_ULL(CPU_FEATURE_VINT)
#define LOONGARCH_CPU_CSRIPI BIT_ULL(CPU_FEATURE_CSRIPI)
#define LOONGARCH_CPU_EXTIOI BIT_ULL(CPU_FEATURE_EXTIOI)
#define LOONGARCH_CPU_PREFETCH BIT_ULL(CPU_FEATURE_PREFETCH)
#define LOONGARCH_CPU_PMP BIT_ULL(CPU_FEATURE_PMP)
#define LOONGARCH_CPU_SCALEFREQ BIT_ULL(CPU_FEATURE_SCALEFREQ)
#define LOONGARCH_CPU_FLATMODE BIT_ULL(CPU_FEATURE_FLATMODE)
#define LOONGARCH_CPU_EIODECODE BIT_ULL(CPU_FEATURE_EIODECODE)
#define LOONGARCH_CPU_GUESTID BIT_ULL(CPU_FEATURE_GUESTID)
#define LOONGARCH_CPU_HYPERVISOR BIT_ULL(CPU_FEATURE_HYPERVISOR)
#endif /* _ASM_CPU_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* CPU feature definitions for module loading, used by
* module_cpu_feature_match(), see uapi/asm/hwcap.h for LoongArch CPU features.
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef __ASM_CPUFEATURE_H
#define __ASM_CPUFEATURE_H
#include <uapi/asm/hwcap.h>
#include <asm/elf.h>
#define MAX_CPU_FEATURES (8 * sizeof(elf_hwcap))
#define cpu_feature(x) ilog2(HWCAP_ ## x)
static inline bool cpu_have_feature(unsigned int num)
{
return elf_hwcap & (1UL << num);
}
#endif /* __ASM_CPUFEATURE_H */
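A hypothetical sketch of the intended use: cpu_feature() converts an HWCAP_* bit mask into the bit number that cpu_have_feature() tests, which is the contract module_cpu_feature_match() relies on. HWCAP_LOONGARCH_LSX is assumed to be one of the HWCAP_* bits from uapi/asm/hwcap.h.

/*
 * Hypothetical example; cpu_feature(LOONGARCH_LSX) expands to
 * ilog2(HWCAP_LOONGARCH_LSX).
 */
static bool example_lsx_usable(void)
{
	return cpu_have_feature(cpu_feature(LOONGARCH_LSX));
}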
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_DELAY_H
#define _ASM_DELAY_H
#include <linux/param.h>
extern void __delay(unsigned long cycles);
extern void __ndelay(unsigned long ns);
extern void __udelay(unsigned long us);
#define ndelay(ns) __ndelay(ns)
#define udelay(us) __udelay(us)
/* Make sure "usecs *= ..." in udelay() does not overflow. */
#if HZ >= 1000
#define MAX_UDELAY_MS 1
#elif HZ <= 200
#define MAX_UDELAY_MS 5
#else
#define MAX_UDELAY_MS (1000 / HZ)
#endif
#endif /* _ASM_DELAY_H */
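The MAX_UDELAY_MS cap exists because __udelay() multiplies the microsecond count by a loops-per-jiffy derived constant, and that product must not overflow a machine word; longer waits have to be split into bounded chunks. A hypothetical sketch of the splitting (this is essentially what the generic mdelay() wrapper in linux/delay.h does):

/*
 * Hypothetical illustration: delay for 'ms' milliseconds in chunks
 * small enough that the multiplication inside __udelay() stays safe.
 */
static inline void example_mdelay(unsigned long ms)
{
	while (ms--)
		__udelay(1000);	/* 1 ms at a time, well under MAX_UDELAY_MS */
}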
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _LOONGARCH_DMA_DIRECT_H
#define _LOONGARCH_DMA_DIRECT_H
dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr);
phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr);
#endif /* _LOONGARCH_DMA_DIRECT_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_DMI_H
#define _ASM_DMI_H
#include <linux/io.h>
#include <linux/memblock.h>
#define dmi_early_remap(x, l) dmi_remap(x, l)
#define dmi_early_unmap(x, l) dmi_unmap(x)
#define dmi_alloc(l) memblock_alloc(l, PAGE_SIZE)
static inline void *dmi_remap(u64 phys_addr, unsigned long size)
{
return ((void *)TO_CACHE(phys_addr));
}
static inline void dmi_unmap(void *addr)
{
}
#endif /* _ASM_DMI_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_LOONGARCH_EFI_H
#define _ASM_LOONGARCH_EFI_H
#include <linux/efi.h>
void __init efi_init(void);
void __init efi_runtime_init(void);
void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
#define ARCH_EFI_IRQ_FLAGS_MASK 0x00000004 /* Bit 2: CSR.CRMD.IE */
#define arch_efi_call_virt_setup() \
({ \
})
#define arch_efi_call_virt(p, f, args...) \
({ \
efi_##f##_t * __f; \
__f = p->f; \
__f(args); \
})
#define arch_efi_call_virt_teardown() \
({ \
})
#define EFI_ALLOC_ALIGN SZ_64K
struct screen_info *alloc_screen_info(void);
void free_screen_info(struct screen_info *si);
static inline unsigned long efi_get_max_initrd_addr(unsigned long image_addr)
{
return ULONG_MAX;
}
#endif /* _ASM_LOONGARCH_EFI_H */
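For clarity (editorial note): arch_efi_call_virt() derives the function pointer type from the runtime service name, so a call such as arch_efi_call_virt(p, get_time, tm, tc) expands roughly as sketched below, picking up the efi_get_time_t typedef from linux/efi.h. The empty setup/teardown macros reflect that no extra register or MMU state needs to be saved around runtime calls on this port.

/*
 * Hypothetical expansion sketch of arch_efi_call_virt(p, get_time, tm, tc):
 *
 *	efi_get_time_t *__f = p->get_time;
 *	__f(tm, tc);
 */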
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_ELF_H
#define _ASM_ELF_H
#include <linux/auxvec.h>
#include <linux/fs.h>
#include <uapi/linux/elf.h>
#include <asm/current.h>
#include <asm/vdso.h>
/* The ABI of a file. */
#define EF_LOONGARCH_ABI_LP64_SOFT_FLOAT 0x1
#define EF_LOONGARCH_ABI_LP64_SINGLE_FLOAT 0x2
#define EF_LOONGARCH_ABI_LP64_DOUBLE_FLOAT 0x3
#define EF_LOONGARCH_ABI_ILP32_SOFT_FLOAT 0x5
#define EF_LOONGARCH_ABI_ILP32_SINGLE_FLOAT 0x6
#define EF_LOONGARCH_ABI_ILP32_DOUBLE_FLOAT 0x7
/* LoongArch relocation types used by the dynamic linker */
#define R_LARCH_NONE 0
#define R_LARCH_32 1
#define R_LARCH_64 2
#define R_LARCH_RELATIVE 3
#define R_LARCH_COPY 4
#define R_LARCH_JUMP_SLOT 5
#define R_LARCH_TLS_DTPMOD32 6
#define R_LARCH_TLS_DTPMOD64 7
#define R_LARCH_TLS_DTPREL32 8
#define R_LARCH_TLS_DTPREL64 9
#define R_LARCH_TLS_TPREL32 10
#define R_LARCH_TLS_TPREL64 11
#define R_LARCH_IRELATIVE 12
#define R_LARCH_MARK_LA 20
#define R_LARCH_MARK_PCREL 21
#define R_LARCH_SOP_PUSH_PCREL 22
#define R_LARCH_SOP_PUSH_ABSOLUTE 23
#define R_LARCH_SOP_PUSH_DUP 24
#define R_LARCH_SOP_PUSH_GPREL 25
#define R_LARCH_SOP_PUSH_TLS_TPREL 26
#define R_LARCH_SOP_PUSH_TLS_GOT 27
#define R_LARCH_SOP_PUSH_TLS_GD 28
#define R_LARCH_SOP_PUSH_PLT_PCREL 29
#define R_LARCH_SOP_ASSERT 30
#define R_LARCH_SOP_NOT 31
#define R_LARCH_SOP_SUB 32
#define R_LARCH_SOP_SL 33
#define R_LARCH_SOP_SR 34
#define R_LARCH_SOP_ADD 35
#define R_LARCH_SOP_AND 36
#define R_LARCH_SOP_IF_ELSE 37
#define R_LARCH_SOP_POP_32_S_10_5 38
#define R_LARCH_SOP_POP_32_U_10_12 39
#define R_LARCH_SOP_POP_32_S_10_12 40
#define R_LARCH_SOP_POP_32_S_10_16 41
#define R_LARCH_SOP_POP_32_S_10_16_S2 42
#define R_LARCH_SOP_POP_32_S_5_20 43
#define R_LARCH_SOP_POP_32_S_0_5_10_16_S2 44
#define R_LARCH_SOP_POP_32_S_0_10_10_16_S2 45
#define R_LARCH_SOP_POP_32_U 46
#define R_LARCH_ADD8 47
#define R_LARCH_ADD16 48
#define R_LARCH_ADD24 49
#define R_LARCH_ADD32 50
#define R_LARCH_ADD64 51
#define R_LARCH_SUB8 52
#define R_LARCH_SUB16 53
#define R_LARCH_SUB24 54
#define R_LARCH_SUB32 55
#define R_LARCH_SUB64 56
#define R_LARCH_GNU_VTINHERIT 57
#define R_LARCH_GNU_VTENTRY 58
#ifndef ELF_ARCH
/* ELF register definitions */
/*
* The general purpose register set contains the following registers:
* Register Number
* GPRs 32
* ORIG_A0 1
* ERA 1
* BADVADDR 1
* CRMD 1
* PRMD 1
* EUEN 1
* ECFG 1
* ESTAT 1
* Reserved 5
*/
#define ELF_NGREG 45
/*
* The floating point register set contains the following registers:
* Register Number
* FPR 32
* FCC 1
* FCSR 1
*/
#define ELF_NFPREG 34
typedef unsigned long elf_greg_t;
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef double elf_fpreg_t;
typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
void loongarch_dump_regs64(u64 *uregs, const struct pt_regs *regs);
#ifdef CONFIG_32BIT
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch elf32_check_arch
/*
* These are used to set parameters in the core dumps.
*/
#define ELF_CLASS ELFCLASS32
#define ELF_CORE_COPY_REGS(dest, regs) \
loongarch_dump_regs32((u32 *)&(dest), (regs));
#endif /* CONFIG_32BIT */
#ifdef CONFIG_64BIT
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch elf64_check_arch
/*
* These are used to set parameters in the core dumps.
*/
#define ELF_CLASS ELFCLASS64
#define ELF_CORE_COPY_REGS(dest, regs) \
loongarch_dump_regs64((u64 *)&(dest), (regs));
#endif /* CONFIG_64BIT */
/*
* These are used to set parameters in the core dumps.
*/
#define ELF_DATA ELFDATA2LSB
#define ELF_ARCH EM_LOONGARCH
#endif /* !defined(ELF_ARCH) */
#define loongarch_elf_check_machine(x) ((x)->e_machine == EM_LOONGARCH)
#define vmcore_elf32_check_arch loongarch_elf_check_machine
#define vmcore_elf64_check_arch loongarch_elf_check_machine
/*
* Return non-zero if HDR identifies a 32-bit ELF binary.
*/
#define elf32_check_arch(hdr) \
({ \
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
if (!loongarch_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
__res = 0; \
\
__res; \
})
/*
* Return non-zero if HDR identifies a 64-bit ELF binary.
*/
#define elf64_check_arch(hdr) \
({ \
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
if (!loongarch_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS64) \
__res = 0; \
\
__res; \
})
#ifdef CONFIG_32BIT
#define SET_PERSONALITY2(ex, state) \
do { \
current->thread.vdso = &vdso_info; \
\
loongarch_set_personality_fcsr(state); \
\
if (personality(current->personality) != PER_LINUX) \
set_personality(PER_LINUX); \
} while (0)
#endif /* CONFIG_32BIT */
#ifdef CONFIG_64BIT
#define SET_PERSONALITY2(ex, state) \
do { \
unsigned int p; \
\
clear_thread_flag(TIF_32BIT_REGS); \
clear_thread_flag(TIF_32BIT_ADDR); \
\
current->thread.vdso = &vdso_info; \
loongarch_set_personality_fcsr(state); \
\
p = personality(current->personality); \
if (p != PER_LINUX32 && p != PER_LINUX) \
set_personality(PER_LINUX); \
} while (0)
#endif /* CONFIG_64BIT */
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE PAGE_SIZE
/*
* This yields a mask that user programs can use to figure out what
* instruction set this cpu supports. This could be done in userspace,
* but it's not easy, and we've already done it here.
*/
#define ELF_HWCAP (elf_hwcap)
extern unsigned int elf_hwcap;
#include <asm/hwcap.h>
/*
* This yields a string that ld.so will use to load implementation
* specific libraries for optimization. This is more specific in
* intent than poking at uname or /proc/cpuinfo.
*/
#define ELF_PLATFORM __elf_platform
extern const char *__elf_platform;
#define ELF_PLAT_INIT(_r, load_addr) do { \
_r->regs[1] = _r->regs[2] = _r->regs[3] = _r->regs[4] = 0; \
_r->regs[5] = _r->regs[6] = _r->regs[7] = _r->regs[8] = 0; \
_r->regs[9] = _r->regs[10] = _r->regs[11] = _r->regs[12] = 0; \
_r->regs[13] = _r->regs[14] = _r->regs[15] = _r->regs[16] = 0; \
_r->regs[17] = _r->regs[18] = _r->regs[19] = _r->regs[20] = 0; \
_r->regs[21] = _r->regs[22] = _r->regs[23] = _r->regs[24] = 0; \
_r->regs[25] = _r->regs[26] = _r->regs[27] = _r->regs[28] = 0; \
_r->regs[29] = _r->regs[30] = _r->regs[31] = 0; \
} while (0)
/*
* This is the location that an ET_DYN program is loaded if exec'ed. Typical
* use of this is to invoke "./ld.so someprog" to test out a new version of
* the loader. We need to make sure that it is out of the way of the program
* that it will "exec", and that there is sufficient room for the brk.
*/
#define ELF_ET_DYN_BASE (TASK_SIZE / 3 * 2)
/* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
#define ARCH_DLINFO \
do { \
NEW_AUX_ENT(AT_SYSINFO_EHDR, \
(unsigned long)current->mm->context.vdso); \
} while (0)
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
struct linux_binprm;
extern int arch_setup_additional_pages(struct linux_binprm *bprm,
int uses_interp);
struct arch_elf_state {
int fp_abi;
int interp_fp_abi;
};
#define LOONGARCH_ABI_FP_ANY (0)
#define INIT_ARCH_ELF_STATE { \
.fp_abi = LOONGARCH_ABI_FP_ANY, \
.interp_fp_abi = LOONGARCH_ABI_FP_ANY, \
}
#define elf_read_implies_exec(ex, exec_stk) (exec_stk == EXSTACK_DEFAULT)
extern int arch_elf_pt_proc(void *ehdr, void *phdr, struct file *elf,
bool is_interp, struct arch_elf_state *state);
extern int arch_check_elf(void *ehdr, bool has_interpreter, void *interp_ehdr,
struct arch_elf_state *state);
extern void loongarch_set_personality_fcsr(struct arch_elf_state *state);
#endif /* _ASM_ELF_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef ARCH_LOONGARCH_ENTRY_COMMON_H
#define ARCH_LOONGARCH_ENTRY_COMMON_H
#include <linux/sched.h>
#include <linux/processor.h>
static inline bool on_thread_stack(void)
{
return !(((unsigned long)(current->stack) ^ current_stack_pointer) & ~(THREAD_SIZE - 1));
}
#endif
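The XOR-and-mask test above works because task stacks are THREAD_SIZE aligned: two addresses lie in the same stack exactly when all bits above the THREAD_SIZE offset agree, i.e. when XOR-ing them leaves nothing outside the low bits. A hypothetical standalone form of the same test:

/*
 * Hypothetical illustration of the alignment trick used by
 * on_thread_stack(): true iff 'a' and 'b' fall within the same
 * THREAD_SIZE-aligned region.
 */
static inline bool same_stack(unsigned long a, unsigned long b)
{
	return !((a ^ b) & ~(THREAD_SIZE - 1));
}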
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_EXEC_H
#define _ASM_EXEC_H
extern unsigned long arch_align_stack(unsigned long sp);
#endif /* _ASM_EXEC_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_FB_H_
#define _ASM_FB_H_
#include <linux/fb.h>
#include <linux/fs.h>
#include <asm/page.h>
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
{
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
}
static inline int fb_is_primary_device(struct fb_info *info)
{
return 0;
}
#endif /* _ASM_FB_H_ */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* fixmap.h: compile-time virtual memory allocation
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_FIXMAP_H
#define _ASM_FIXMAP_H
#define NR_FIX_BTMAPS 64
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Definitions for the FPU register names
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_FPREGDEF_H
#define _ASM_FPREGDEF_H
#define fa0 $f0 /* argument registers, fa0/fa1 reused as fv0/fv1 for return value */
#define fa1 $f1
#define fa2 $f2
#define fa3 $f3
#define fa4 $f4
#define fa5 $f5
#define fa6 $f6
#define fa7 $f7
#define ft0 $f8 /* caller saved */
#define ft1 $f9
#define ft2 $f10
#define ft3 $f11
#define ft4 $f12
#define ft5 $f13
#define ft6 $f14
#define ft7 $f15
#define ft8 $f16
#define ft9 $f17
#define ft10 $f18
#define ft11 $f19
#define ft12 $f20
#define ft13 $f21
#define ft14 $f22
#define ft15 $f23
#define fs0 $f24 /* callee saved */
#define fs1 $f25
#define fs2 $f26
#define fs3 $f27
#define fs4 $f28
#define fs5 $f29
#define fs6 $f30
#define fs7 $f31
/*
* Current binutils expects *GPRs* at FCSR position for the FCSR
* operation instructions, so define aliases for those used.
*/
#define fcsr0 $r0
#define fcsr1 $r1
#define fcsr2 $r2
#define fcsr3 $r3
#define vcsr16 $r16
#endif /* _ASM_FPREGDEF_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_HARDIRQ_H
#define _ASM_HARDIRQ_H
#include <linux/cache.h>
#include <linux/threads.h>
#include <linux/irq.h>
extern void ack_bad_irq(unsigned int irq);
#define ack_bad_irq ack_bad_irq
#define NR_IPI 2
typedef struct {
unsigned int ipi_irqs[NR_IPI];
unsigned int __softirq_pending;
} ____cacheline_aligned irq_cpustat_t;
DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);
#define __ARCH_IRQ_STAT
#endif /* _ASM_HARDIRQ_H */