Commit 1ec6d097 authored by Linus Torvalds

Merge tag 's390-6.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Vasily Gorbik:

 - Optimize ftrace and kprobes code patching and avoid stop machine for
   kprobes if sequential instruction fetching facility is available

 - Add hiperdispatch feature to dynamically adjust CPU capacity in
   vertical polarization to improve scheduling efficiency and overall
   performance. Also add infrastructure for handling warning track
   interrupts (WTI), allowing for graceful CPU preemption

 - Rework crypto code pkey module and split it into separate,
   independent modules for sysfs, PCKMO, CCA, and EP11, allowing modules
   to load only when the relevant hardware is available

 - Add hardware acceleration for HMAC modes and the full AES-XTS cipher,
   utilizing message-security assist extensions (MSA) 10 and 11. It
   introduces new shash implementations for HMAC-SHA224/256/384/512 and
   registers the hardware-accelerated AES-XTS cipher as the preferred
   option. Also add clear key token support

 - Add MSA 10 and 11 processor activity instrumentation counters to perf
   and update PAI Extension 1 NNPA counters

 - Clean up CPU sampling facility code and rework debug/WARN_ON_ONCE
   statements

 - Add support for SHA3 performance enhancements introduced with MSA 12

 - Add support for the query authentication information feature of MSA
   13 and introduce the KDSA CPACF instruction. Provide query and query
   authentication information in sysfs, enabling tools like cpacfinfo to
   present this data in a human-readable form

 - Update kernel disassembler instructions

 - Always enable EXPOLINE_EXTERN if supported by the compiler to ensure
   kpatch compatibility

 - Add missing warning handling and relocated lowcore support to the
   early program check handler

 - Optimize ftrace_return_address() and avoid calling unwinder

 - Make modules use kernel ftrace trampolines

 - Strip relocs from the final vmlinux ELF file to make it roughly 2
   times smaller

 - Dump register contents and call trace for early crashes to the
   console

 - Generate ptdump address marker array dynamically

 - Fix rcu_sched stalls that might occur when adding or removing large
   numbers of pages at once to or from the CMM balloon

 - Fix deadlock caused by recursive lock of the AP bus scan mutex

 - Unify sync and async register save areas in entry code

 - Clean up debug prints in crypto code

 - Various cleanup and sanitizing patches for the decompressor

 - Various small ftrace cleanups

* tag 's390-6.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (84 commits)
  s390/crypto: Display Query and Query Authentication Information in sysfs
  s390/crypto: Add Support for Query Authentication Information
  s390/crypto: Rework RRE and RRF CPACF inline functions
  s390/crypto: Add KDSA CPACF Instruction
  s390/disassembler: Remove duplicate instruction format RSY_RDRU
  s390/boot: Move boot_printk() code to own file
  s390/boot: Use boot_printk() instead of sclp_early_printk()
  s390/boot: Rename decompressor_printk() to boot_printk()
  s390/boot: Compile all files with the same march flag
  s390: Use MARCH_HAS_*_FEATURES defines
  s390: Provide MARCH_HAS_*_FEATURES defines
  s390/facility: Disable compile time optimization for decompressor code
  s390/boot: Increase minimum architecture to z10
  s390/als: Remove obsolete comment
  s390/sha3: Fix SHA3 selftests failures
  s390/pkey: Add AES xts and HMAC clear key token support
  s390/cpacf: Add MSA 10 and 11 new PCKMO functions
  s390/mm: Add cond_resched() to cmm_alloc/free_pages()
  s390/pai_ext: Update PAI extension 1 counters
  s390/pai_crypto: Add support for MSA 10 and 11 pai counters
  ...
parents 7856a565 9fed8d7c
@@ -514,6 +514,26 @@ config SCHED_TOPOLOGY
making when dealing with machines that have multi-threading,
multiple cores or multiple books.

+config SCHED_TOPOLOGY_VERTICAL
+def_bool y
+bool "Use vertical CPU polarization by default"
+depends on SCHED_TOPOLOGY
+help
+Use vertical CPU polarization by default if available.
+The default CPU polarization is horizontal.
+
+config HIPERDISPATCH_ON
+def_bool y
+bool "Use hiperdispatch on vertical polarization by default"
+depends on SCHED_TOPOLOGY
+depends on PROC_SYSCTL
+help
+Hiperdispatch aims to improve the CPU scheduler's decision
+making when using vertical polarization by adjusting CPU
+capacities dynamically. Set this option to use hiperdispatch
+on vertical polarization by default. This can be overwritten
+by sysctl's s390.hiperdispatch attribute later on.
+
source "kernel/Kconfig.hz"

config CERT_STORE
@@ -558,17 +578,13 @@ config EXPOLINE
If unsure, say N.

config EXPOLINE_EXTERN
-def_bool y if EXPOLINE
-depends on EXPOLINE
-depends on CC_IS_GCC && GCC_VERSION >= 110200
-depends on $(success,$(srctree)/arch/s390/tools/gcc-thunk-extern.sh $(CC))
-prompt "Generate expolines as extern functions."
+def_bool EXPOLINE && CC_IS_GCC && GCC_VERSION >= 110200 && \
+$(success,$(srctree)/arch/s390/tools/gcc-thunk-extern.sh $(CC))
help
-This option is required for some tooling like kpatch. The kernel is
-compiled with -mindirect-branch=thunk-extern and requires a newer
-compiler.
-
-If unsure, say N.
+Generate expolines as external functions if the compiler supports it.
+This option is required for some tooling like kpatch, if expolines
+are enabled. The kernel is compiled with
+-mindirect-branch=thunk-extern, which requires a newer compiler.

choice
prompt "Expoline default"
...
# SPDX-License-Identifier: GPL-2.0
# ===========================================================================
# Post-link s390 pass
# ===========================================================================
#
# 1. Separate relocations from vmlinux into relocs.S.
# 2. Strip relocations from vmlinux.
PHONY := __archpost
__archpost:
-include include/config/auto.conf
include $(srctree)/scripts/Kbuild.include
CMD_RELOCS=arch/s390/tools/relocs
OUT_RELOCS = arch/s390/boot
quiet_cmd_relocs = RELOCS $(OUT_RELOCS)/relocs.S
cmd_relocs = \
mkdir -p $(OUT_RELOCS); \
$(CMD_RELOCS) $@ > $(OUT_RELOCS)/relocs.S
quiet_cmd_strip_relocs = RSTRIP $@
cmd_strip_relocs = \
$(OBJCOPY) --remove-section='.rel.*' --remove-section='.rel__*' \
--remove-section='.rela.*' --remove-section='.rela__*' $@
vmlinux: FORCE
$(call cmd,relocs)
$(call cmd,strip_relocs)
clean:
@rm -f $(OUT_RELOCS)/relocs.S
PHONY += FORCE clean
FORCE:
.PHONY: $(PHONY)
@@ -11,35 +11,23 @@ KASAN_SANITIZE := n
KCSAN_SANITIZE := n
KMSAN_SANITIZE := n

-KBUILD_AFLAGS := $(KBUILD_AFLAGS_DECOMPRESSOR)
-KBUILD_CFLAGS := $(KBUILD_CFLAGS_DECOMPRESSOR)
-
#
-# Use minimum architecture for als.c to be able to print an error
+# Use minimum architecture level so it is possible to print an error
# message if the kernel is started on a machine which is too old
#
-ifndef CONFIG_CC_IS_CLANG
-CC_FLAGS_MARCH_MINIMUM := -march=z900
-else
CC_FLAGS_MARCH_MINIMUM := -march=z10
-endif
-
-ifneq ($(CC_FLAGS_MARCH),$(CC_FLAGS_MARCH_MINIMUM))
-AFLAGS_REMOVE_head.o += $(CC_FLAGS_MARCH)
-AFLAGS_head.o += $(CC_FLAGS_MARCH_MINIMUM)
-AFLAGS_REMOVE_mem.o += $(CC_FLAGS_MARCH)
-AFLAGS_mem.o += $(CC_FLAGS_MARCH_MINIMUM)
-CFLAGS_REMOVE_als.o += $(CC_FLAGS_MARCH)
-CFLAGS_als.o += $(CC_FLAGS_MARCH_MINIMUM)
-CFLAGS_REMOVE_sclp_early_core.o += $(CC_FLAGS_MARCH)
-CFLAGS_sclp_early_core.o += $(CC_FLAGS_MARCH_MINIMUM)
-endif
+
+KBUILD_AFLAGS := $(filter-out $(CC_FLAGS_MARCH),$(KBUILD_AFLAGS_DECOMPRESSOR))
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_MARCH),$(KBUILD_CFLAGS_DECOMPRESSOR))
+KBUILD_AFLAGS += $(CC_FLAGS_MARCH_MINIMUM)
+KBUILD_CFLAGS += $(CC_FLAGS_MARCH_MINIMUM)

CFLAGS_sclp_early_core.o += -I$(srctree)/drivers/s390/char

obj-y := head.o als.o startup.o physmem_info.o ipl_parm.o ipl_report.o vmem.o
obj-y += string.o ebcdic.o sclp_early_core.o mem.o ipl_vmparm.o cmdline.o
-obj-y += version.o pgm_check_info.o ctype.o ipl_data.o relocs.o alternative.o uv.o
+obj-y += version.o pgm_check_info.o ctype.o ipl_data.o relocs.o alternative.o
+obj-y += uv.o printk.o
obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
obj-y += $(if $(CONFIG_KERNEL_UNCOMPRESSED),,decompressor.o) info.o
obj-$(CONFIG_KERNEL_ZSTD) += clz_ctz.o
@@ -109,11 +97,9 @@ OBJCOPYFLAGS_vmlinux.bin := -O binary --remove-section=.comment --remove-section
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)

-CMD_RELOCS=arch/s390/tools/relocs
-quiet_cmd_relocs = RELOCS $@
-cmd_relocs = $(CMD_RELOCS) $< > $@
-$(obj)/relocs.S: vmlinux FORCE
-$(call if_changed,relocs)
+# relocs.S is created by the vmlinux postlink step.
+$(obj)/relocs.S: vmlinux
+@true

suffix-$(CONFIG_KERNEL_GZIP) := .gz
suffix-$(CONFIG_KERNEL_BZIP2) := .bz2
...
@@ -9,42 +9,8 @@
#include <asm/sclp.h>
#include "boot.h"

-/*
- * The code within this file will be called very early. It may _not_
- * access anything within the bss section, since that is not cleared
- * yet and may contain data (e.g. initrd) that must be saved by other
- * code.
- * For temporary objects the stack (16k) should be used.
- */
-
static unsigned long als[] = { FACILITIES_ALS };

-static void u16_to_hex(char *str, u16 val)
-{
-int i, num;
-
-for (i = 1; i <= 4; i++) {
-num = (val >> (16 - 4 * i)) & 0xf;
-if (num >= 10)
-num += 7;
-*str++ = '0' + num;
-}
-*str = '\0';
-}
-
-static void print_machine_type(void)
-{
-static char mach_str[80] = "Detected machine-type number: ";
-char type_str[5];
-struct cpuid id;
-
-get_cpu_id(&id);
-u16_to_hex(type_str, id.machine);
-strcat(mach_str, type_str);
-strcat(mach_str, "\n");
-sclp_early_printk(mach_str);
-}
-
static void u16_to_decimal(char *str, u16 val)
{
int div = 1;
@@ -80,8 +46,7 @@ void print_missing_facilities(void)
 * z/VM adds a four character prefix.
 */
if (strlen(als_str) > 70) {
-strcat(als_str, "\n");
-sclp_early_printk(als_str);
+boot_printk("%s\n", als_str);
*als_str = '\0';
}
u16_to_decimal(val_str, i * BITS_PER_LONG + j);
@@ -89,16 +54,18 @@ void print_missing_facilities(void)
first = 0;
}
}
-strcat(als_str, "\n");
-sclp_early_printk(als_str);
+boot_printk("%s\n", als_str);
}

static void facility_mismatch(void)
{
-sclp_early_printk("The Linux kernel requires more recent processor hardware\n");
-print_machine_type();
+struct cpuid id;
+
+get_cpu_id(&id);
+boot_printk("The Linux kernel requires more recent processor hardware\n");
+boot_printk("Detected machine-type number: %4x\n", id.machine);
print_missing_facilities();
-sclp_early_printk("See Principles of Operations for facility bits\n");
+boot_printk("See Principles of Operations for facility bits\n");
disabled_wait();
}
...
@@ -70,7 +70,7 @@ void print_pgm_check_info(void);
unsigned long randomize_within_range(unsigned long size, unsigned long align,
unsigned long min, unsigned long max);
void setup_vmem(unsigned long kernel_start, unsigned long kernel_end, unsigned long asce_limit);
-void __printf(1, 2) decompressor_printk(const char *fmt, ...);
+void __printf(1, 2) boot_printk(const char *fmt, ...);
void print_stacktrace(unsigned long sp);
void error(char *m);
int get_random(unsigned long limit, unsigned long *value);
...
@@ -299,11 +299,11 @@ SYM_CODE_END(startup_normal)
# the save area and does disabled wait with a faulty address.
#
SYM_CODE_START_LOCAL(startup_pgm_check_handler)
-stmg %r8,%r15,__LC_SAVE_AREA_SYNC
+stmg %r8,%r15,__LC_SAVE_AREA
la %r8,4095
stctg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r8)
stmg %r0,%r7,__LC_GPREGS_SAVE_AREA-4095(%r8)
-mvc __LC_GPREGS_SAVE_AREA-4095+64(64,%r8),__LC_SAVE_AREA_SYNC
+mvc __LC_GPREGS_SAVE_AREA-4095+64(64,%r8),__LC_SAVE_AREA
mvc __LC_PSW_SAVE_AREA-4095(16,%r8),__LC_PGM_OLD_PSW
mvc __LC_RETURN_PSW(16),__LC_PGM_OLD_PSW
ni __LC_RETURN_PSW,0xfc # remove IO and EX bits
...
@@ -215,7 +215,7 @@ static void check_cleared_facilities(void)
for (i = 0; i < ARRAY_SIZE(als); i++) {
if ((stfle_fac_list[i] & als[i]) != als[i]) {
-sclp_early_printk("Warning: The Linux kernel requires facilities cleared via command line option\n");
+boot_printk("Warning: The Linux kernel requires facilities cleared via command line option\n");
print_missing_facilities();
break;
}
...
@@ -32,7 +32,7 @@ struct prng_parm {
static int check_prng(void)
{
if (!cpacf_query_func(CPACF_KMC, CPACF_KMC_PRNG)) {
-sclp_early_printk("KASLR disabled: CPU has no PRNG\n");
+boot_printk("KASLR disabled: CPU has no PRNG\n");
return 0;
}
if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG))
...
@@ -11,131 +11,19 @@
#include <asm/uv.h>
#include "boot.h"
const char hex_asc[] = "0123456789abcdef";
static char *as_hex(char *dst, unsigned long val, int pad)
{
char *p, *end = p = dst + max(pad, (int)__fls(val | 1) / 4 + 1);
for (*p-- = 0; p >= dst; val >>= 4)
*p-- = hex_asc[val & 0x0f];
return end;
}
static char *symstart(char *p)
{
while (*p)
p--;
return p + 1;
}
static noinline char *findsym(unsigned long ip, unsigned short *off, unsigned short *len)
{
/* symbol entries are in a form "10000 c4 startup\0" */
char *a = _decompressor_syms_start;
char *b = _decompressor_syms_end;
unsigned long start;
unsigned long size;
char *pivot;
char *endp;
while (a < b) {
pivot = symstart(a + (b - a) / 2);
start = simple_strtoull(pivot, &endp, 16);
size = simple_strtoull(endp + 1, &endp, 16);
if (ip < start) {
b = pivot;
continue;
}
if (ip > start + size) {
a = pivot + strlen(pivot) + 1;
continue;
}
*off = ip - start;
*len = size;
return endp + 1;
}
return NULL;
}
static noinline char *strsym(void *ip)
{
static char buf[64];
unsigned short off;
unsigned short len;
char *p;
p = findsym((unsigned long)ip, &off, &len);
if (p) {
strncpy(buf, p, sizeof(buf));
/* reserve 15 bytes for offset/len in symbol+0x1234/0x1234 */
p = buf + strnlen(buf, sizeof(buf) - 15);
strcpy(p, "+0x");
p = as_hex(p + 3, off, 0);
strcpy(p, "/0x");
as_hex(p + 3, len, 0);
} else {
as_hex(buf, (unsigned long)ip, 16);
}
return buf;
}
void decompressor_printk(const char *fmt, ...)
{
char buf[1024] = { 0 };
char *end = buf + sizeof(buf) - 1; /* make sure buf is 0 terminated */
unsigned long pad;
char *p = buf;
va_list args;
va_start(args, fmt);
for (; p < end && *fmt; fmt++) {
if (*fmt != '%') {
*p++ = *fmt;
continue;
}
pad = isdigit(*++fmt) ? simple_strtol(fmt, (char **)&fmt, 10) : 0;
switch (*fmt) {
case 's':
p = buf + strlcat(buf, va_arg(args, char *), sizeof(buf));
break;
case 'p':
if (*++fmt != 'S')
goto out;
p = buf + strlcat(buf, strsym(va_arg(args, void *)), sizeof(buf));
break;
case 'l':
if (*++fmt != 'x' || end - p <= max(sizeof(long) * 2, pad))
goto out;
p = as_hex(p, va_arg(args, unsigned long), pad);
break;
case 'x':
if (end - p <= max(sizeof(int) * 2, pad))
goto out;
p = as_hex(p, va_arg(args, unsigned int), pad);
break;
default:
goto out;
}
}
out:
va_end(args);
sclp_early_printk(buf);
}
void print_stacktrace(unsigned long sp)
{
struct stack_info boot_stack = { STACK_TYPE_TASK, (unsigned long)_stack_start,
(unsigned long)_stack_end };
bool first = true;

-decompressor_printk("Call Trace:\n");
+boot_printk("Call Trace:\n");
while (!(sp & 0x7) && on_stack(&boot_stack, sp, sizeof(struct stack_frame))) {
struct stack_frame *sf = (struct stack_frame *)sp;

-decompressor_printk(first ? "(sp:%016lx [<%016lx>] %pS)\n" :
+boot_printk(first ? "(sp:%016lx [<%016lx>] %pS)\n" :
" sp:%016lx [<%016lx>] %pS\n",
sp, sf->gprs[8], (void *)sf->gprs[8]);
if (sf->back_chain <= sp)
break;
sp = sf->back_chain;
@@ -148,34 +36,30 @@ void print_pgm_check_info(void)
unsigned long *gpregs = (unsigned long *)get_lowcore()->gpregs_save_area;
struct psw_bits *psw = &psw_bits(get_lowcore()->psw_save_area);

-decompressor_printk("Linux version %s\n", kernel_version);
+boot_printk("Linux version %s\n", kernel_version);
if (!is_prot_virt_guest() && early_command_line[0])
-decompressor_printk("Kernel command line: %s\n", early_command_line);
-decompressor_printk("Kernel fault: interruption code %04x ilc:%x\n",
+boot_printk("Kernel command line: %s\n", early_command_line);
+boot_printk("Kernel fault: interruption code %04x ilc:%x\n",
get_lowcore()->pgm_code, get_lowcore()->pgm_ilc >> 1);
if (kaslr_enabled()) {
-decompressor_printk("Kernel random base: %lx\n", __kaslr_offset);
-decompressor_printk("Kernel random base phys: %lx\n", __kaslr_offset_phys);
+boot_printk("Kernel random base: %lx\n", __kaslr_offset);
+boot_printk("Kernel random base phys: %lx\n", __kaslr_offset_phys);
}
-decompressor_printk("PSW : %016lx %016lx (%pS)\n",
+boot_printk("PSW : %016lx %016lx (%pS)\n",
get_lowcore()->psw_save_area.mask,
get_lowcore()->psw_save_area.addr,
(void *)get_lowcore()->psw_save_area.addr);
-decompressor_printk(
+boot_printk(
" R:%x T:%x IO:%x EX:%x Key:%x M:%x W:%x P:%x AS:%x CC:%x PM:%x RI:%x EA:%x\n",
psw->per, psw->dat, psw->io, psw->ext, psw->key, psw->mcheck,
psw->wait, psw->pstate, psw->as, psw->cc, psw->pm, psw->ri,
psw->eaba);
-decompressor_printk("GPRS: %016lx %016lx %016lx %016lx\n",
-gpregs[0], gpregs[1], gpregs[2], gpregs[3]);
-decompressor_printk(" %016lx %016lx %016lx %016lx\n",
-gpregs[4], gpregs[5], gpregs[6], gpregs[7]);
-decompressor_printk(" %016lx %016lx %016lx %016lx\n",
-gpregs[8], gpregs[9], gpregs[10], gpregs[11]);
-decompressor_printk(" %016lx %016lx %016lx %016lx\n",
-gpregs[12], gpregs[13], gpregs[14], gpregs[15]);
+boot_printk("GPRS: %016lx %016lx %016lx %016lx\n", gpregs[0], gpregs[1], gpregs[2], gpregs[3]);
+boot_printk(" %016lx %016lx %016lx %016lx\n", gpregs[4], gpregs[5], gpregs[6], gpregs[7]);
+boot_printk(" %016lx %016lx %016lx %016lx\n", gpregs[8], gpregs[9], gpregs[10], gpregs[11]);
+boot_printk(" %016lx %016lx %016lx %016lx\n", gpregs[12], gpregs[13], gpregs[14], gpregs[15]);
print_stacktrace(get_lowcore()->gpregs_save_area[15]);
-decompressor_printk("Last Breaking-Event-Address:\n");
-decompressor_printk(" [<%016lx>] %pS\n", (unsigned long)get_lowcore()->pgm_last_break,
+boot_printk("Last Breaking-Event-Address:\n");
+boot_printk(" [<%016lx>] %pS\n", (unsigned long)get_lowcore()->pgm_last_break,
(void *)get_lowcore()->pgm_last_break);
}
@@ -190,27 +190,27 @@ static void die_oom(unsigned long size, unsigned long align, unsigned long min,
enum reserved_range_type t;
int i;

-decompressor_printk("Linux version %s\n", kernel_version);
+boot_printk("Linux version %s\n", kernel_version);
if (!is_prot_virt_guest() && early_command_line[0])
-decompressor_printk("Kernel command line: %s\n", early_command_line);
-decompressor_printk("Out of memory allocating %lx bytes %lx aligned in range %lx:%lx\n",
+boot_printk("Kernel command line: %s\n", early_command_line);
+boot_printk("Out of memory allocating %lx bytes %lx aligned in range %lx:%lx\n",
size, align, min, max);
-decompressor_printk("Reserved memory ranges:\n");
+boot_printk("Reserved memory ranges:\n");
for_each_physmem_reserved_range(t, range, &start, &end) {
-decompressor_printk("%016lx %016lx %s\n", start, end, get_rr_type_name(t));
+boot_printk("%016lx %016lx %s\n", start, end, get_rr_type_name(t));
total_reserved_mem += end - start;
}
-decompressor_printk("Usable online memory ranges (info source: %s [%x]):\n",
+boot_printk("Usable online memory ranges (info source: %s [%x]):\n",
get_physmem_info_source(), physmem_info.info_source);
for_each_physmem_usable_range(i, &start, &end) {
-decompressor_printk("%016lx %016lx\n", start, end);
+boot_printk("%016lx %016lx\n", start, end);
total_mem += end - start;
}
-decompressor_printk("Usable online memory total: %lx Reserved: %lx Free: %lx\n",
+boot_printk("Usable online memory total: %lx Reserved: %lx Free: %lx\n",
total_mem, total_reserved_mem,
total_mem > total_reserved_mem ? total_mem - total_reserved_mem : 0);
print_stacktrace(current_frame_address());
-sclp_early_printk("\n\n -- System halted\n");
+boot_printk("\n\n -- System halted\n");
disabled_wait();
}
...
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/stdarg.h>
#include <linux/string.h>
#include <linux/ctype.h>
#include <asm/stacktrace.h>
#include <asm/boot_data.h>
#include <asm/lowcore.h>
#include <asm/setup.h>
#include <asm/sclp.h>
#include <asm/uv.h>
#include "boot.h"
const char hex_asc[] = "0123456789abcdef";
static char *as_hex(char *dst, unsigned long val, int pad)
{
char *p, *end = p = dst + max(pad, (int)__fls(val | 1) / 4 + 1);
for (*p-- = 0; p >= dst; val >>= 4)
*p-- = hex_asc[val & 0x0f];
return end;
}
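/*
 * Example: as_hex(buf, 0x2a, 4) stores "002a" followed by a terminating NUL
 * and returns a pointer to that NUL, so the next piece of output can simply
 * be appended at the returned position (see strsym() below).
 */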
static char *symstart(char *p)
{
while (*p)
p--;
return p + 1;
}
static noinline char *findsym(unsigned long ip, unsigned short *off, unsigned short *len)
{
/* symbol entries are in a form "10000 c4 startup\0" */
char *a = _decompressor_syms_start;
char *b = _decompressor_syms_end;
unsigned long start;
unsigned long size;
char *pivot;
char *endp;
while (a < b) {
pivot = symstart(a + (b - a) / 2);
start = simple_strtoull(pivot, &endp, 16);
size = simple_strtoull(endp + 1, &endp, 16);
if (ip < start) {
b = pivot;
continue;
}
if (ip > start + size) {
a = pivot + strlen(pivot) + 1;
continue;
}
*off = ip - start;
*len = size;
return endp + 1;
}
return NULL;
}
static noinline char *strsym(void *ip)
{
static char buf[64];
unsigned short off;
unsigned short len;
char *p;
p = findsym((unsigned long)ip, &off, &len);
if (p) {
strncpy(buf, p, sizeof(buf));
/* reserve 15 bytes for offset/len in symbol+0x1234/0x1234 */
p = buf + strnlen(buf, sizeof(buf) - 15);
strcpy(p, "+0x");
p = as_hex(p + 3, off, 0);
strcpy(p, "/0x");
as_hex(p + 3, len, 0);
} else {
as_hex(buf, (unsigned long)ip, 16);
}
return buf;
}
void boot_printk(const char *fmt, ...)
{
char buf[1024] = { 0 };
char *end = buf + sizeof(buf) - 1; /* make sure buf is 0 terminated */
unsigned long pad;
char *p = buf;
va_list args;
va_start(args, fmt);
for (; p < end && *fmt; fmt++) {
if (*fmt != '%') {
*p++ = *fmt;
continue;
}
pad = isdigit(*++fmt) ? simple_strtol(fmt, (char **)&fmt, 10) : 0;
switch (*fmt) {
case 's':
p = buf + strlcat(buf, va_arg(args, char *), sizeof(buf));
break;
case 'p':
if (*++fmt != 'S')
goto out;
p = buf + strlcat(buf, strsym(va_arg(args, void *)), sizeof(buf));
break;
case 'l':
if (*++fmt != 'x' || end - p <= max(sizeof(long) * 2, pad))
goto out;
p = as_hex(p, va_arg(args, unsigned long), pad);
break;
case 'x':
if (end - p <= max(sizeof(int) * 2, pad))
goto out;
p = as_hex(p, va_arg(args, unsigned int), pad);
break;
default:
goto out;
}
}
out:
va_end(args);
sclp_early_printk(buf);
}
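The parser above deliberately supports only a handful of conversions: %s, %pS for symbol+offset/size, and zero-padded hex via %x and %lx with an optional width such as %04x or %016lx. A minimal usage sketch (illustrative only; the variable names other than kernel_version are placeholders, the real call sites appear in the hunks below):

boot_printk("Linux version %s\n", kernel_version);
boot_printk("Kernel fault: interruption code %04x ilc:%x\n", pgm_code, pgm_ilc >> 1);
boot_printk("PSW : %016lx %016lx (%pS)\n", psw_mask, psw_addr, (void *)psw_addr);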
@@ -39,10 +39,7 @@ struct machine_info machine;

void error(char *x)
{
-sclp_early_printk("\n\n");
-sclp_early_printk(x);
-sclp_early_printk("\n\n -- System halted");
+boot_printk("\n\n%s\n\n -- System halted", x);
disabled_wait();
}
@@ -296,7 +293,7 @@ static unsigned long setup_kernel_memory_layout(unsigned long kernel_size)
kernel_start = round_down(kernel_end - kernel_size, THREAD_SIZE);
} else if (vmax < __NO_KASLR_END_KERNEL || vsize > __NO_KASLR_END_KERNEL) {
kernel_start = round_down(vmax - kernel_size, THREAD_SIZE);
-decompressor_printk("The kernel base address is forced to %lx\n", kernel_start);
+boot_printk("The kernel base address is forced to %lx\n", kernel_start);
} else {
kernel_start = __NO_KASLR_START_KERNEL;
}
...
@@ -794,8 +794,12 @@ CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_CHACHA_S390=m
+CONFIG_CRYPTO_HMAC_S390=m
CONFIG_ZCRYPT=m
CONFIG_PKEY=m
+CONFIG_PKEY_CCA=m
+CONFIG_PKEY_EP11=m
+CONFIG_PKEY_PCKMO=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_DEV_VIRTIO=m
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
...
@@ -781,8 +781,12 @@ CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_CHACHA_S390=m
+CONFIG_CRYPTO_HMAC_S390=m
CONFIG_ZCRYPT=m
CONFIG_PKEY=m
+CONFIG_PKEY_CCA=m
+CONFIG_PKEY_EP11=m
+CONFIG_PKEY_PCKMO=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_DEV_VIRTIO=m
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
...
@@ -132,4 +132,14 @@ config CRYPTO_CHACHA_S390
It is available as of z13.

+config CRYPTO_HMAC_S390
+tristate "Keyed-hash message authentication code: HMAC"
+depends on S390
+select CRYPTO_HASH
+help
+s390 specific HMAC hardware support for SHA224, SHA256, SHA384 and
+SHA512.
+
+Architecture: s390
+
endmenu
@@ -15,6 +15,7 @@ obj-$(CONFIG_CRYPTO_CHACHA_S390) += chacha_s390.o
obj-$(CONFIG_S390_PRNG) += prng.o
obj-$(CONFIG_CRYPTO_GHASH_S390) += ghash_s390.o
obj-$(CONFIG_CRYPTO_CRC32_S390) += crc32-vx_s390.o
+obj-$(CONFIG_CRYPTO_HMAC_S390) += hmac_s390.o
obj-y += arch_random.o

crc32-vx_s390-y := crc32-vx.o crc32le-vx.o crc32be-vx.o
...
@@ -51,8 +51,13 @@ struct s390_aes_ctx {
};

struct s390_xts_ctx {
-u8 key[32];
-u8 pcc_key[32];
+union {
+u8 keys[64];
+struct {
+u8 key[32];
+u8 pcc_key[32];
+};
+};
int key_len;
unsigned long fc;
struct crypto_skcipher *fallback;
@@ -526,6 +531,108 @@ static struct skcipher_alg xts_aes_alg = {
.decrypt = xts_aes_decrypt,
};
static int fullxts_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
unsigned long fc;
int err;
err = xts_fallback_setkey(tfm, in_key, key_len);
if (err)
return err;
/* Pick the correct function code based on the key length */
fc = (key_len == 32) ? CPACF_KM_XTS_128_FULL :
(key_len == 64) ? CPACF_KM_XTS_256_FULL : 0;
/* Check if the function code is available */
xts_ctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
if (!xts_ctx->fc)
return 0;
/* Store double-key */
memcpy(xts_ctx->keys, in_key, key_len);
xts_ctx->key_len = key_len;
return 0;
}
static int fullxts_aes_crypt(struct skcipher_request *req, unsigned long modifier)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
unsigned int offset, nbytes, n;
struct skcipher_walk walk;
int ret;
struct {
__u8 key[64];
__u8 tweak[16];
__u8 nap[16];
} fxts_param = {
.nap = {0},
};
if (req->cryptlen < AES_BLOCK_SIZE)
return -EINVAL;
if (unlikely(!xts_ctx->fc || (req->cryptlen % AES_BLOCK_SIZE) != 0)) {
struct skcipher_request *subreq = skcipher_request_ctx(req);
*subreq = *req;
skcipher_request_set_tfm(subreq, xts_ctx->fallback);
return (modifier & CPACF_DECRYPT) ?
crypto_skcipher_decrypt(subreq) :
crypto_skcipher_encrypt(subreq);
}
ret = skcipher_walk_virt(&walk, req, false);
if (ret)
return ret;
offset = xts_ctx->key_len & 0x20;
memcpy(fxts_param.key + offset, xts_ctx->keys, xts_ctx->key_len);
memcpy(fxts_param.tweak, req->iv, AES_BLOCK_SIZE);
fxts_param.nap[0] = 0x01; /* initial alpha power (1, little-endian) */
while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
cpacf_km(xts_ctx->fc | modifier, fxts_param.key + offset,
walk.dst.virt.addr, walk.src.virt.addr, n);
ret = skcipher_walk_done(&walk, nbytes - n);
}
memzero_explicit(&fxts_param, sizeof(fxts_param));
return ret;
}
static int fullxts_aes_encrypt(struct skcipher_request *req)
{
return fullxts_aes_crypt(req, 0);
}
static int fullxts_aes_decrypt(struct skcipher_request *req)
{
return fullxts_aes_crypt(req, CPACF_DECRYPT);
}
static struct skcipher_alg fullxts_aes_alg = {
.base.cra_name = "xts(aes)",
.base.cra_driver_name = "full-xts-aes-s390",
.base.cra_priority = 403, /* aes-xts-s390 + 1 */
.base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
.base.cra_blocksize = AES_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct s390_xts_ctx),
.base.cra_module = THIS_MODULE,
.init = xts_fallback_init,
.exit = xts_fallback_exit,
.min_keysize = 2 * AES_MIN_KEY_SIZE,
.max_keysize = 2 * AES_MAX_KEY_SIZE,
.ivsize = AES_BLOCK_SIZE,
.setkey = fullxts_aes_set_key,
.encrypt = fullxts_aes_encrypt,
.decrypt = fullxts_aes_decrypt,
};
static int ctr_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
@@ -955,7 +1062,7 @@ static struct aead_alg gcm_aes_aead = {
};

static struct crypto_alg *aes_s390_alg;
-static struct skcipher_alg *aes_s390_skcipher_algs[4];
+static struct skcipher_alg *aes_s390_skcipher_algs[5];
static int aes_s390_skciphers_num;
static struct aead_alg *aes_s390_aead_alg;
@@ -1012,6 +1119,13 @@ static int __init aes_s390_init(void)
goto out_err;
}

+if (cpacf_test_func(&km_functions, CPACF_KM_XTS_128_FULL) ||
+cpacf_test_func(&km_functions, CPACF_KM_XTS_256_FULL)) {
+ret = aes_s390_register_skcipher(&fullxts_aes_alg);
+if (ret)
+goto out_err;
+}
+
if (cpacf_test_func(&km_functions, CPACF_KM_XTS_128) ||
cpacf_test_func(&km_functions, CPACF_KM_XTS_256)) {
ret = aes_s390_register_skcipher(&xts_aes_alg);
...
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright IBM Corp. 2024
*
* s390 specific HMAC support.
*/
#define KMSG_COMPONENT "hmac_s390"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
#include <asm/cpacf.h>
#include <crypto/sha2.h>
#include <crypto/internal/hash.h>
#include <linux/cpufeature.h>
#include <linux/module.h>
/*
* KMAC param block layout for sha2 function codes:
* The layout of the param block for the KMAC instruction depends on the
* blocksize of the used hashing sha2-algorithm function codes. The param block
* contains the hash chaining value (cv), the input message bit-length (imbl)
* and the hmac-secret (key). To prevent code duplication, the sizes of all
* these are calculated based on the blocksize.
*
* param-block:
* +-------+
* | cv |
* +-------+
* | imbl |
* +-------+
* | key |
* +-------+
*
* sizes:
* part | sh2-alg | calculation | size | type
* -----+---------+-------------+------+--------
* cv | 224/256 | blocksize/2 | 32 | u64[8]
* | 384/512 | | 64 | u128[8]
* imbl | 224/256 | blocksize/8 | 8 | u64
* | 384/512 | | 16 | u128
* key | 224/256 | blocksize | 64 | u8[64]
* | 384/512 | | 128 | u8[128]
*/
#define MAX_DIGEST_SIZE SHA512_DIGEST_SIZE
#define MAX_IMBL_SIZE sizeof(u128)
#define MAX_BLOCK_SIZE SHA512_BLOCK_SIZE
#define SHA2_CV_SIZE(bs) ((bs) >> 1)
#define SHA2_IMBL_SIZE(bs) ((bs) >> 3)
#define SHA2_IMBL_OFFSET(bs) (SHA2_CV_SIZE(bs))
#define SHA2_KEY_OFFSET(bs) (SHA2_CV_SIZE(bs) + SHA2_IMBL_SIZE(bs))
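/*
 * Worked example of the layout described above: for the 128 byte
 * SHA-384/SHA-512 block size the cv occupies bytes 0..63, imbl bytes
 * 64..79 and the key bytes 80..207; for the 64 byte SHA-224/SHA-256
 * block size the corresponding offsets are 0, 32 and 40.
 */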
struct s390_hmac_ctx {
u8 key[MAX_BLOCK_SIZE];
};
union s390_kmac_gr0 {
unsigned long reg;
struct {
unsigned long : 48;
unsigned long ikp : 1;
unsigned long iimp : 1;
unsigned long ccup : 1;
unsigned long : 6;
unsigned long fc : 7;
};
};
struct s390_kmac_sha2_ctx {
u8 param[MAX_DIGEST_SIZE + MAX_IMBL_SIZE + MAX_BLOCK_SIZE];
union s390_kmac_gr0 gr0;
u8 buf[MAX_BLOCK_SIZE];
unsigned int buflen;
};
/*
* kmac_sha2_set_imbl - sets the input message bit-length based on the blocksize
*/
static inline void kmac_sha2_set_imbl(u8 *param, unsigned int buflen,
unsigned int blocksize)
{
u8 *imbl = param + SHA2_IMBL_OFFSET(blocksize);
switch (blocksize) {
case SHA256_BLOCK_SIZE:
*(u64 *)imbl = (u64)buflen * BITS_PER_BYTE;
break;
case SHA512_BLOCK_SIZE:
*(u128 *)imbl = (u128)buflen * BITS_PER_BYTE;
break;
default:
break;
}
}
static int hash_key(const u8 *in, unsigned int inlen,
u8 *digest, unsigned int digestsize)
{
unsigned long func;
union {
struct sha256_paramblock {
u32 h[8];
u64 mbl;
} sha256;
struct sha512_paramblock {
u64 h[8];
u128 mbl;
} sha512;
} __packed param;
#define PARAM_INIT(x, y, z) \
param.sha##x.h[0] = SHA##y ## _H0; \
param.sha##x.h[1] = SHA##y ## _H1; \
param.sha##x.h[2] = SHA##y ## _H2; \
param.sha##x.h[3] = SHA##y ## _H3; \
param.sha##x.h[4] = SHA##y ## _H4; \
param.sha##x.h[5] = SHA##y ## _H5; \
param.sha##x.h[6] = SHA##y ## _H6; \
param.sha##x.h[7] = SHA##y ## _H7; \
param.sha##x.mbl = (z)
switch (digestsize) {
case SHA224_DIGEST_SIZE:
func = CPACF_KLMD_SHA_256;
PARAM_INIT(256, 224, inlen * 8);
break;
case SHA256_DIGEST_SIZE:
func = CPACF_KLMD_SHA_256;
PARAM_INIT(256, 256, inlen * 8);
break;
case SHA384_DIGEST_SIZE:
func = CPACF_KLMD_SHA_512;
PARAM_INIT(512, 384, inlen * 8);
break;
case SHA512_DIGEST_SIZE:
func = CPACF_KLMD_SHA_512;
PARAM_INIT(512, 512, inlen * 8);
break;
default:
return -EINVAL;
}
#undef PARAM_INIT
cpacf_klmd(func, &param, in, inlen);
memcpy(digest, &param, digestsize);
return 0;
}
static int s390_hmac_sha2_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
struct s390_hmac_ctx *tfm_ctx = crypto_shash_ctx(tfm);
unsigned int ds = crypto_shash_digestsize(tfm);
unsigned int bs = crypto_shash_blocksize(tfm);
memset(tfm_ctx, 0, sizeof(*tfm_ctx));
if (keylen > bs)
return hash_key(key, keylen, tfm_ctx->key, ds);
memcpy(tfm_ctx->key, key, keylen);
return 0;
}
static int s390_hmac_sha2_init(struct shash_desc *desc)
{
struct s390_hmac_ctx *tfm_ctx = crypto_shash_ctx(desc->tfm);
struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
unsigned int bs = crypto_shash_blocksize(desc->tfm);
memcpy(ctx->param + SHA2_KEY_OFFSET(bs),
tfm_ctx->key, bs);
ctx->buflen = 0;
ctx->gr0.reg = 0;
switch (crypto_shash_digestsize(desc->tfm)) {
case SHA224_DIGEST_SIZE:
ctx->gr0.fc = CPACF_KMAC_HMAC_SHA_224;
break;
case SHA256_DIGEST_SIZE:
ctx->gr0.fc = CPACF_KMAC_HMAC_SHA_256;
break;
case SHA384_DIGEST_SIZE:
ctx->gr0.fc = CPACF_KMAC_HMAC_SHA_384;
break;
case SHA512_DIGEST_SIZE:
ctx->gr0.fc = CPACF_KMAC_HMAC_SHA_512;
break;
default:
return -EINVAL;
}
return 0;
}
static int s390_hmac_sha2_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
unsigned int bs = crypto_shash_blocksize(desc->tfm);
unsigned int offset, n;
/* check current buffer */
offset = ctx->buflen % bs;
ctx->buflen += len;
if (offset + len < bs)
goto store;
/* process one stored block */
if (offset) {
n = bs - offset;
memcpy(ctx->buf + offset, data, n);
ctx->gr0.iimp = 1;
_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, bs);
data += n;
len -= n;
offset = 0;
}
/* process as many blocks as possible */
if (len >= bs) {
n = (len / bs) * bs;
ctx->gr0.iimp = 1;
_cpacf_kmac(&ctx->gr0.reg, ctx->param, data, n);
data += n;
len -= n;
}
store:
/* store incomplete block in buffer */
if (len)
memcpy(ctx->buf + offset, data, len);
return 0;
}
static int s390_hmac_sha2_final(struct shash_desc *desc, u8 *out)
{
struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
unsigned int bs = crypto_shash_blocksize(desc->tfm);
ctx->gr0.iimp = 0;
kmac_sha2_set_imbl(ctx->param, ctx->buflen, bs);
_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, ctx->buflen % bs);
memcpy(out, ctx->param, crypto_shash_digestsize(desc->tfm));
return 0;
}
static int s390_hmac_sha2_digest(struct shash_desc *desc,
const u8 *data, unsigned int len, u8 *out)
{
struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
unsigned int ds = crypto_shash_digestsize(desc->tfm);
int rc;
rc = s390_hmac_sha2_init(desc);
if (rc)
return rc;
ctx->gr0.iimp = 0;
kmac_sha2_set_imbl(ctx->param, len,
crypto_shash_blocksize(desc->tfm));
_cpacf_kmac(&ctx->gr0.reg, ctx->param, data, len);
memcpy(out, ctx->param, ds);
return 0;
}
#define S390_HMAC_SHA2_ALG(x) { \
.fc = CPACF_KMAC_HMAC_SHA_##x, \
.alg = { \
.init = s390_hmac_sha2_init, \
.update = s390_hmac_sha2_update, \
.final = s390_hmac_sha2_final, \
.digest = s390_hmac_sha2_digest, \
.setkey = s390_hmac_sha2_setkey, \
.descsize = sizeof(struct s390_kmac_sha2_ctx), \
.halg = { \
.digestsize = SHA##x##_DIGEST_SIZE, \
.base = { \
.cra_name = "hmac(sha" #x ")", \
.cra_driver_name = "hmac_s390_sha" #x, \
.cra_blocksize = SHA##x##_BLOCK_SIZE, \
.cra_priority = 400, \
.cra_ctxsize = sizeof(struct s390_hmac_ctx), \
.cra_module = THIS_MODULE, \
}, \
}, \
}, \
}
static struct s390_hmac_alg {
bool registered;
unsigned int fc;
struct shash_alg alg;
} s390_hmac_algs[] = {
S390_HMAC_SHA2_ALG(224),
S390_HMAC_SHA2_ALG(256),
S390_HMAC_SHA2_ALG(384),
S390_HMAC_SHA2_ALG(512),
};
static __always_inline void _s390_hmac_algs_unregister(void)
{
struct s390_hmac_alg *hmac;
int i;
for (i = ARRAY_SIZE(s390_hmac_algs) - 1; i >= 0; i--) {
hmac = &s390_hmac_algs[i];
if (!hmac->registered)
continue;
crypto_unregister_shash(&hmac->alg);
}
}
static int __init hmac_s390_init(void)
{
struct s390_hmac_alg *hmac;
int i, rc = -ENODEV;
if (!cpacf_query_func(CPACF_KLMD, CPACF_KLMD_SHA_256))
return -ENODEV;
if (!cpacf_query_func(CPACF_KLMD, CPACF_KLMD_SHA_512))
return -ENODEV;
for (i = 0; i < ARRAY_SIZE(s390_hmac_algs); i++) {
hmac = &s390_hmac_algs[i];
if (!cpacf_query_func(CPACF_KMAC, hmac->fc))
continue;
rc = crypto_register_shash(&hmac->alg);
if (rc) {
pr_err("unable to register %s\n",
hmac->alg.halg.base.cra_name);
goto out;
}
hmac->registered = true;
pr_debug("registered %s\n", hmac->alg.halg.base.cra_name);
}
return rc;
out:
_s390_hmac_algs_unregister();
return rc;
}
static void __exit hmac_s390_exit(void)
{
_s390_hmac_algs_unregister();
}
module_cpu_feature_match(S390_CPU_FEATURE_MSA, hmac_s390_init);
module_exit(hmac_s390_exit);
MODULE_DESCRIPTION("S390 HMAC driver");
MODULE_LICENSE("GPL");
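Once registered, these shashes are reached through the regular kernel crypto API by algorithm name; with a cra_priority of 400 they are preferred over the generic software HMAC implementation whenever the CPACF function is available. A minimal sketch of a hypothetical in-kernel caller (not part of this series, error handling trimmed):

#include <crypto/hash.h>

static int hmac_sha256_digest_example(const u8 *key, unsigned int keylen,
                                      const u8 *data, unsigned int len, u8 *out)
{
    struct crypto_shash *tfm;
    int rc;

    /* resolves to hmac_s390_sha256 when the CPACF HMAC function is present */
    tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
    if (IS_ERR(tfm))
        return PTR_ERR(tfm);
    rc = crypto_shash_setkey(tfm, key, keylen);
    if (!rc)
        rc = crypto_shash_tfm_digest(tfm, data, len, out);
    crypto_free_shash(tfm);
    return rc;
}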
@@ -133,8 +133,8 @@ static inline int __paes_keyblob2pkey(struct key_blob *kb,
if (msleep_interruptible(1000))
return -EINTR;
}
-ret = pkey_keyblob2pkey(kb->key, kb->keylen,
-pk->protkey, &pk->len, &pk->type);
+ret = pkey_key2protkey(kb->key, kb->keylen,
+pk->protkey, &pk->len, &pk->type);
}

return ret;
...
@@ -25,6 +25,7 @@ struct s390_sha_ctx {
u32 state[CPACF_MAX_PARMBLOCK_SIZE / sizeof(u32)];
u8 buf[SHA_MAX_BLOCK_SIZE];
int func; /* KIMD function to use */
+int first_message_part;
};

struct shash_desc;
...
@@ -21,9 +21,11 @@ static int sha3_256_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);

-memset(sctx->state, 0, sizeof(sctx->state));
+if (!test_facility(86)) /* msa 12 */
+memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_256;
+sctx->first_message_part = 1;
return 0;
}
@@ -36,6 +38,7 @@ static int sha3_256_export(struct shash_desc *desc, void *out)
octx->rsiz = sctx->count;
memcpy(octx->st, sctx->state, sizeof(octx->st));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
+octx->partial = sctx->first_message_part;
return 0;
}
@@ -48,6 +51,7 @@ static int sha3_256_import(struct shash_desc *desc, const void *in)
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+sctx->first_message_part = ictx->partial;
sctx->func = CPACF_KIMD_SHA3_256;
return 0;
@@ -61,6 +65,7 @@ static int sha3_224_import(struct shash_desc *desc, const void *in)
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+sctx->first_message_part = ictx->partial;
sctx->func = CPACF_KIMD_SHA3_224;
return 0;
@@ -88,9 +93,11 @@ static int sha3_224_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);

-memset(sctx->state, 0, sizeof(sctx->state));
+if (!test_facility(86)) /* msa 12 */
+memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_224;
+sctx->first_message_part = 1;
return 0;
}
...
@@ -20,9 +20,11 @@ static int sha3_512_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);

-memset(sctx->state, 0, sizeof(sctx->state));
+if (!test_facility(86)) /* msa 12 */
+memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_512;
+sctx->first_message_part = 1;
return 0;
}
@@ -37,6 +39,7 @@ static int sha3_512_export(struct shash_desc *desc, void *out)
memcpy(octx->st, sctx->state, sizeof(octx->st));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
+octx->partial = sctx->first_message_part;
return 0;
}
@@ -52,6 +55,7 @@ static int sha3_512_import(struct shash_desc *desc, const void *in)
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+sctx->first_message_part = ictx->partial;
sctx->func = CPACF_KIMD_SHA3_512;
return 0;
@@ -68,6 +72,7 @@ static int sha3_384_import(struct shash_desc *desc, const void *in)
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+sctx->first_message_part = ictx->partial;
sctx->func = CPACF_KIMD_SHA3_384;
return 0;
@@ -97,9 +102,11 @@ static int sha3_384_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);

-memset(sctx->state, 0, sizeof(sctx->state));
+if (!test_facility(86)) /* msa 12 */
+memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_384;
+sctx->first_message_part = 1;
return 0;
}
...
@@ -18,6 +18,7 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
struct s390_sha_ctx *ctx = shash_desc_ctx(desc);
unsigned int bsize = crypto_shash_blocksize(desc->tfm);
unsigned int index, n;
+int fc;

/* how much is already in the buffer? */
index = ctx->count % bsize;
@@ -26,10 +27,16 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
if ((index + len) < bsize)
goto store;

+fc = ctx->func;
+if (ctx->first_message_part)
+fc |= test_facility(86) ? CPACF_KIMD_NIP : 0;
+
/* process one stored block */
if (index) {
memcpy(ctx->buf + index, data, bsize - index);
-cpacf_kimd(ctx->func, ctx->state, ctx->buf, bsize);
+cpacf_kimd(fc, ctx->state, ctx->buf, bsize);
+ctx->first_message_part = 0;
+fc &= ~CPACF_KIMD_NIP;
data += bsize - index;
len -= bsize - index;
index = 0;
@@ -38,7 +45,8 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
/* process as many blocks as possible */
if (len >= bsize) {
n = (len / bsize) * bsize;
-cpacf_kimd(ctx->func, ctx->state, data, n);
+cpacf_kimd(fc, ctx->state, data, n);
+ctx->first_message_part = 0;
data += n;
len -= n;
}
@@ -75,7 +83,7 @@ int s390_sha_final(struct shash_desc *desc, u8 *out)
unsigned int bsize = crypto_shash_blocksize(desc->tfm);
u64 bits;
unsigned int n;
-int mbl_offset;
+int mbl_offset, fc;

n = ctx->count % bsize;
bits = ctx->count * 8;
@@ -109,7 +117,11 @@ int s390_sha_final(struct shash_desc *desc, u8 *out)
return -EINVAL;
}

-cpacf_klmd(ctx->func, ctx->state, ctx->buf, n);
+fc = ctx->func;
+fc |= test_facility(86) ? CPACF_KLMD_DUFOP : 0;
+if (ctx->first_message_part)
+fc |= CPACF_KLMD_NIP;
+cpacf_klmd(fc, ctx->state, ctx->buf, n);

/* copy digest to out */
memcpy(out, ctx->state, crypto_shash_digestsize(desc->tfm));
...
@@ -78,7 +78,6 @@ struct hypfs_dbfs_file {
struct dentry *dentry;
};

-extern void hypfs_dbfs_exit(void);
extern void hypfs_dbfs_create_file(struct hypfs_dbfs_file *df);
extern void hypfs_dbfs_remove_file(struct hypfs_dbfs_file *df);
...
@@ -29,8 +29,6 @@ static enum diag204_format diag204_info_type; /* used diag 204 data format */
static void *diag204_buf; /* 4K aligned buffer for diag204 data */
static int diag204_buf_pages; /* number of pages for diag204 data */

-static struct dentry *dbfs_d204_file;
-
enum diag204_format diag204_get_info_type(void)
{
return diag204_info_type;
@@ -214,16 +212,13 @@ __init int hypfs_diag_init(void)
hypfs_dbfs_create_file(&dbfs_file_d204);

rc = hypfs_diag_fs_init();
-if (rc) {
+if (rc)
pr_err("The hardware system does not provide all functions required by hypfs\n");
-debugfs_remove(dbfs_d204_file);
-}
return rc;
}

void hypfs_diag_exit(void)
{
-debugfs_remove(dbfs_d204_file);
hypfs_diag_fs_exit();
diag204_free_buffer();
hypfs_dbfs_remove_file(&dbfs_file_d204);
...
@@ -4,6 +4,7 @@
#define _ASM_S390_ARCH_HWEIGHT_H

#include <linux/types.h>
+#include <asm/march.h>

static __always_inline unsigned long popcnt_z196(unsigned long w)
{
@@ -29,9 +30,9 @@ static __always_inline unsigned long popcnt_z15(unsigned long w)

static __always_inline unsigned long __arch_hweight64(__u64 w)
{
-if (IS_ENABLED(CONFIG_HAVE_MARCH_Z15_FEATURES))
+if (__is_defined(MARCH_HAS_Z15_FEATURES))
return popcnt_z15(w);
-if (IS_ENABLED(CONFIG_HAVE_MARCH_Z196_FEATURES)) {
+if (__is_defined(MARCH_HAS_Z196_FEATURES)) {
w = popcnt_z196(w);
w += w >> 32;
w += w >> 16;
@@ -43,9 +44,9 @@ static __always_inline unsigned long __arch_hweight64(__u64 w)

static __always_inline unsigned int __arch_hweight32(unsigned int w)
{
-if (IS_ENABLED(CONFIG_HAVE_MARCH_Z15_FEATURES))
+if (__is_defined(MARCH_HAS_Z15_FEATURES))
return popcnt_z15(w);
-if (IS_ENABLED(CONFIG_HAVE_MARCH_Z196_FEATURES)) {
+if (__is_defined(MARCH_HAS_Z196_FEATURES)) {
w = popcnt_z196(w);
w += w >> 16;
w += w >> 8;
@@ -56,9 +57,9 @@ static __always_inline unsigned int __arch_hweight32(unsigned int w)

static __always_inline unsigned int __arch_hweight16(unsigned int w)
{
-if (IS_ENABLED(CONFIG_HAVE_MARCH_Z15_FEATURES))
+if (__is_defined(MARCH_HAS_Z15_FEATURES))
return popcnt_z15((unsigned short)w);
-if (IS_ENABLED(CONFIG_HAVE_MARCH_Z196_FEATURES)) {
+if (__is_defined(MARCH_HAS_Z196_FEATURES)) {
w = popcnt_z196(w);
w += w >> 8;
return w & 0xff;
@@ -68,7 +69,7 @@ static __always_inline unsigned int __arch_hweight16(unsigned int w)

static __always_inline unsigned int __arch_hweight8(unsigned int w)
{
-if (IS_ENABLED(CONFIG_HAVE_MARCH_Z196_FEATURES))
+if (__is_defined(MARCH_HAS_Z196_FEATURES))
return popcnt_z196((unsigned char)w);
return __sw_hweight8(w);
}
...
@@ -9,6 +9,7 @@
#define __ARCH_S390_ATOMIC_OPS__

#include <linux/limits.h>
+#include <asm/march.h>

static __always_inline int __atomic_read(const atomic_t *v)
{
@@ -56,7 +57,7 @@ static __always_inline void __atomic64_set(atomic64_t *v, s64 i)
}
}

-#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
+#ifdef MARCH_HAS_Z196_FEATURES

#define __ATOMIC_OP(op_name, op_type, op_string, op_barrier) \
static __always_inline op_type op_name(op_type val, op_type *ptr) \
@@ -107,7 +108,7 @@ __ATOMIC_CONST_OPS(__atomic64_add_const, long, "agsi")
#undef __ATOMIC_CONST_OPS
#undef __ATOMIC_CONST_OP

-#else /* CONFIG_HAVE_MARCH_Z196_FEATURES */
+#else /* MARCH_HAS_Z196_FEATURES */

#define __ATOMIC_OP(op_name, op_string) \
static __always_inline int op_name(int val, int *ptr) \
@@ -166,7 +167,7 @@ __ATOMIC64_OPS(__atomic64_xor, "xgr")
#define __atomic64_add_const(val, ptr) __atomic64_add(val, ptr)
#define __atomic64_add_const_barrier(val, ptr) __atomic64_add(val, ptr)

-#endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */
+#endif /* MARCH_HAS_Z196_FEATURES */

static __always_inline int __atomic_cmpxchg(int *ptr, int old, int new)
{
...
...@@ -8,13 +8,15 @@ ...@@ -8,13 +8,15 @@
#ifndef __ASM_BARRIER_H #ifndef __ASM_BARRIER_H
#define __ASM_BARRIER_H #define __ASM_BARRIER_H
#include <asm/march.h>
/* /*
* Force strict CPU ordering. * Force strict CPU ordering.
* And yes, this is required on UP too when we're talking * And yes, this is required on UP too when we're talking
* to devices. * to devices.
*/ */
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES #ifdef MARCH_HAS_Z196_FEATURES
/* Fast-BCR without checkpoint synchronization */ /* Fast-BCR without checkpoint synchronization */
#define __ASM_BCR_SERIALIZE "bcr 14,0\n" #define __ASM_BCR_SERIALIZE "bcr 14,0\n"
#else #else
......
...@@ -202,8 +202,9 @@ union ctlreg0 { ...@@ -202,8 +202,9 @@ union ctlreg0 {
unsigned long : 3; unsigned long : 3;
unsigned long ccc : 1; /* Cryptography counter control */ unsigned long ccc : 1; /* Cryptography counter control */
unsigned long pec : 1; /* PAI extension control */ unsigned long pec : 1; /* PAI extension control */
unsigned long : 17; unsigned long : 15;
unsigned long : 3; unsigned long wti : 1; /* Warning-track */
unsigned long : 4;
unsigned long lap : 1; /* Low-address-protection control */ unsigned long lap : 1; /* Low-address-protection control */
unsigned long : 4; unsigned long : 4;
unsigned long edat : 1; /* Enhanced-DAT-enablement control */ unsigned long edat : 1; /* Enhanced-DAT-enablement control */
......
...@@ -38,6 +38,7 @@ enum diag_stat_enum { ...@@ -38,6 +38,7 @@ enum diag_stat_enum {
DIAG_STAT_X308, DIAG_STAT_X308,
DIAG_STAT_X318, DIAG_STAT_X318,
DIAG_STAT_X320, DIAG_STAT_X320,
DIAG_STAT_X49C,
DIAG_STAT_X500, DIAG_STAT_X500,
NR_DIAG_STAT NR_DIAG_STAT
}; };
...@@ -363,4 +364,12 @@ void _diag0c_amode31(unsigned long rx); ...@@ -363,4 +364,12 @@ void _diag0c_amode31(unsigned long rx);
void _diag308_reset_amode31(void); void _diag308_reset_amode31(void);
int _diag8c_amode31(struct diag8c *addr, struct ccw_dev_id *devno, size_t len); int _diag8c_amode31(struct diag8c *addr, struct ccw_dev_id *devno, size_t len);
/* diag 49c subcodes */
enum diag49c_sc {
DIAG49C_SUBC_ACK = 0,
DIAG49C_SUBC_REG = 1
};
int diag49c(unsigned long subcode);
#endif /* _ASM_S390_DIAG_H */ #endif /* _ASM_S390_DIAG_H */
...@@ -6,8 +6,23 @@ ...@@ -6,8 +6,23 @@
#define MCOUNT_INSN_SIZE 6 #define MCOUNT_INSN_SIZE 6
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <asm/stacktrace.h>
unsigned long return_address(unsigned int n); static __always_inline unsigned long return_address(unsigned int n)
{
struct stack_frame *sf;
if (!n)
return (unsigned long)__builtin_return_address(0);
sf = (struct stack_frame *)current_frame_address();
do {
sf = (struct stack_frame *)sf->back_chain;
if (!sf)
return 0;
} while (--n);
return sf->gprs[8];
}
#define ftrace_return_address(n) return_address(n) #define ftrace_return_address(n) return_address(n)
void ftrace_caller(void); void ftrace_caller(void);
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright IBM Corp. 2024
*/
#ifndef _ASM_HIPERDISPATCH_H
#define _ASM_HIPERDISPATCH_H
void hd_reset_state(void);
void hd_add_core(int cpu);
void hd_disable_hiperdispatch(void);
int hd_enable_hiperdispatch(void);
#endif /* _ASM_HIPERDISPATCH_H */
...@@ -47,6 +47,7 @@ enum interruption_class { ...@@ -47,6 +47,7 @@ enum interruption_class {
IRQEXT_CMS, IRQEXT_CMS,
IRQEXT_CMC, IRQEXT_CMC,
IRQEXT_FTP, IRQEXT_FTP,
IRQEXT_WTI,
IRQIO_CIO, IRQIO_CIO,
IRQIO_DAS, IRQIO_DAS,
IRQIO_C15, IRQIO_C15,
...@@ -99,6 +100,7 @@ int unregister_external_irq(u16 code, ext_int_handler_t handler); ...@@ -99,6 +100,7 @@ int unregister_external_irq(u16 code, ext_int_handler_t handler);
enum irq_subclass { enum irq_subclass {
IRQ_SUBCLASS_MEASUREMENT_ALERT = 5, IRQ_SUBCLASS_MEASUREMENT_ALERT = 5,
IRQ_SUBCLASS_SERVICE_SIGNAL = 9, IRQ_SUBCLASS_SERVICE_SIGNAL = 9,
IRQ_SUBCLASS_WARNING_TRACK = 33,
}; };
#define CR0_IRQ_SUBCLASS_MASK \ #define CR0_IRQ_SUBCLASS_MASK \
......
...@@ -98,8 +98,8 @@ struct lowcore { ...@@ -98,8 +98,8 @@ struct lowcore {
psw_t io_new_psw; /* 0x01f0 */ psw_t io_new_psw; /* 0x01f0 */
/* Save areas. */ /* Save areas. */
__u64 save_area_sync[8]; /* 0x0200 */ __u64 save_area[8]; /* 0x0200 */
__u64 save_area_async[8]; /* 0x0240 */ __u8 pad_0x0240[0x0280-0x0240]; /* 0x0240 */
__u64 save_area_restart[1]; /* 0x0280 */ __u64 save_area_restart[1]; /* 0x0280 */
__u64 pcpu; /* 0x0288 */ __u64 pcpu; /* 0x0288 */
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_S390_MARCH_H
#define __ASM_S390_MARCH_H
#include <linux/kconfig.h>
#define MARCH_HAS_Z10_FEATURES 1
#ifndef __DECOMPRESSOR
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
#define MARCH_HAS_Z196_FEATURES 1
#endif
#ifdef CONFIG_HAVE_MARCH_ZEC12_FEATURES
#define MARCH_HAS_ZEC12_FEATURES 1
#endif
#ifdef CONFIG_HAVE_MARCH_Z13_FEATURES
#define MARCH_HAS_Z13_FEATURES 1
#endif
#ifdef CONFIG_HAVE_MARCH_Z14_FEATURES
#define MARCH_HAS_Z14_FEATURES 1
#endif
#ifdef CONFIG_HAVE_MARCH_Z15_FEATURES
#define MARCH_HAS_Z15_FEATURES 1
#endif
#ifdef CONFIG_HAVE_MARCH_Z16_FEATURES
#define MARCH_HAS_Z16_FEATURES 1
#endif
#endif /* __DECOMPRESSOR */
#endif /* __ASM_S390_MARCH_H */
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
#include <linux/preempt.h> #include <linux/preempt.h>
#include <asm/cmpxchg.h> #include <asm/cmpxchg.h>
#include <asm/march.h>
/* /*
* s390 uses its own implementation for per cpu data, the offset of * s390 uses its own implementation for per cpu data, the offset of
...@@ -50,7 +51,7 @@ ...@@ -50,7 +51,7 @@
#define this_cpu_or_1(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |) #define this_cpu_or_1(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |)
#define this_cpu_or_2(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |) #define this_cpu_or_2(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |)
#ifndef CONFIG_HAVE_MARCH_Z196_FEATURES #ifndef MARCH_HAS_Z196_FEATURES
#define this_cpu_add_4(pcp, val) arch_this_cpu_to_op_simple(pcp, val, +) #define this_cpu_add_4(pcp, val) arch_this_cpu_to_op_simple(pcp, val, +)
#define this_cpu_add_8(pcp, val) arch_this_cpu_to_op_simple(pcp, val, +) #define this_cpu_add_8(pcp, val) arch_this_cpu_to_op_simple(pcp, val, +)
...@@ -61,7 +62,7 @@ ...@@ -61,7 +62,7 @@
#define this_cpu_or_4(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |) #define this_cpu_or_4(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |)
#define this_cpu_or_8(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |) #define this_cpu_or_8(pcp, val) arch_this_cpu_to_op_simple(pcp, val, |)
#else /* CONFIG_HAVE_MARCH_Z196_FEATURES */ #else /* MARCH_HAS_Z196_FEATURES */
#define arch_this_cpu_add(pcp, val, op1, op2, szcast) \ #define arch_this_cpu_add(pcp, val, op1, op2, szcast) \
{ \ { \
...@@ -129,7 +130,7 @@ ...@@ -129,7 +130,7 @@
#define this_cpu_or_4(pcp, val) arch_this_cpu_to_op(pcp, val, "lao") #define this_cpu_or_4(pcp, val) arch_this_cpu_to_op(pcp, val, "lao")
#define this_cpu_or_8(pcp, val) arch_this_cpu_to_op(pcp, val, "laog") #define this_cpu_or_8(pcp, val) arch_this_cpu_to_op(pcp, val, "laog")
#endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */ #endif /* MARCH_HAS_Z196_FEATURES */
#define arch_this_cpu_cmpxchg(pcp, oval, nval) \ #define arch_this_cpu_cmpxchg(pcp, oval, nval) \
({ \ ({ \
......
...@@ -48,30 +48,6 @@ struct perf_sf_sde_regs { ...@@ -48,30 +48,6 @@ struct perf_sf_sde_regs {
unsigned long reserved:63; /* reserved */ unsigned long reserved:63; /* reserved */
}; };
/* Perf PMU definitions for the counter facility */
#define PERF_CPUM_CF_MAX_CTR 0xffffUL /* Max ctr for ECCTR */
/* Perf PMU definitions for the sampling facility */
#define PERF_CPUM_SF_MAX_CTR 2
#define PERF_EVENT_CPUM_SF 0xB0000UL /* Event: Basic-sampling */
#define PERF_EVENT_CPUM_SF_DIAG 0xBD000UL /* Event: Combined-sampling */
#define PERF_EVENT_CPUM_CF_DIAG 0xBC000UL /* Event: Counter sets */
#define PERF_CPUM_SF_BASIC_MODE 0x0001 /* Basic-sampling flag */
#define PERF_CPUM_SF_DIAG_MODE 0x0002 /* Diagnostic-sampling flag */
#define PERF_CPUM_SF_MODE_MASK (PERF_CPUM_SF_BASIC_MODE| \
PERF_CPUM_SF_DIAG_MODE)
#define PERF_CPUM_SF_FREQ_MODE 0x0008 /* Sampling with frequency */
#define REG_NONE 0
#define REG_OVERFLOW 1
#define OVERFLOW_REG(hwc) ((hwc)->extra_reg.config)
#define SFB_ALLOC_REG(hwc) ((hwc)->extra_reg.alloc)
#define TEAR_REG(hwc) ((hwc)->last_tag)
#define SAMPL_RATE(hwc) ((hwc)->event_base)
#define SAMPL_FLAGS(hwc) ((hwc)->config_base)
#define SAMPL_DIAG_MODE(hwc) (SAMPL_FLAGS(hwc) & PERF_CPUM_SF_DIAG_MODE)
#define SAMPLE_FREQ_MODE(hwc) (SAMPL_FLAGS(hwc) & PERF_CPUM_SF_FREQ_MODE)
#define perf_arch_fetch_caller_regs(regs, __ip) do { \ #define perf_arch_fetch_caller_regs(regs, __ip) do { \
(regs)->psw.addr = (__ip); \ (regs)->psw.addr = (__ip); \
(regs)->gprs[15] = (unsigned long)__builtin_frame_address(0) - \ (regs)->gprs[15] = (unsigned long)__builtin_frame_address(0) - \
......
...@@ -22,7 +22,7 @@ ...@@ -22,7 +22,7 @@
* @param protkey pointer to buffer receiving the protected key * @param protkey pointer to buffer receiving the protected key
* @return 0 on success, negative errno value on failure * @return 0 on success, negative errno value on failure
*/ */
int pkey_keyblob2pkey(const u8 *key, u32 keylen, int pkey_key2protkey(const u8 *key, u32 keylen,
u8 *protkey, u32 *protkeylen, u32 *protkeytype); u8 *protkey, u32 *protkeylen, u32 *protkeytype);
#endif /* _KAPI_PKEY_H */ #endif /* _KAPI_PKEY_H */
...@@ -5,8 +5,9 @@ ...@@ -5,8 +5,9 @@
#include <asm/current.h> #include <asm/current.h>
#include <linux/thread_info.h> #include <linux/thread_info.h>
#include <asm/atomic_ops.h> #include <asm/atomic_ops.h>
#include <asm/march.h>
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES #ifdef MARCH_HAS_Z196_FEATURES
/* We use the MSB mostly because its available */ /* We use the MSB mostly because its available */
#define PREEMPT_NEED_RESCHED 0x80000000 #define PREEMPT_NEED_RESCHED 0x80000000
...@@ -75,7 +76,7 @@ static __always_inline bool should_resched(int preempt_offset) ...@@ -75,7 +76,7 @@ static __always_inline bool should_resched(int preempt_offset)
preempt_offset); preempt_offset);
} }
#else /* CONFIG_HAVE_MARCH_Z196_FEATURES */ #else /* MARCH_HAS_Z196_FEATURES */
#define PREEMPT_ENABLED (0) #define PREEMPT_ENABLED (0)
...@@ -123,7 +124,7 @@ static __always_inline bool should_resched(int preempt_offset) ...@@ -123,7 +124,7 @@ static __always_inline bool should_resched(int preempt_offset)
tif_need_resched()); tif_need_resched());
} }
#endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */ #endif /* MARCH_HAS_Z196_FEATURES */
#define init_task_preempt_count(p) do { } while (0) #define init_task_preempt_count(p) do { } while (0)
/* Deferred to CPU bringup time */ /* Deferred to CPU bringup time */
......
...@@ -44,6 +44,7 @@ struct pcpu { ...@@ -44,6 +44,7 @@ struct pcpu {
unsigned long ec_mask; /* bit mask for ec_xxx functions */ unsigned long ec_mask; /* bit mask for ec_xxx functions */
unsigned long ec_clk; /* sigp timestamp for ec_xxx */ unsigned long ec_clk; /* sigp timestamp for ec_xxx */
unsigned long flags; /* per CPU flags */ unsigned long flags; /* per CPU flags */
unsigned long capacity; /* cpu capacity for scheduler */
signed char state; /* physical cpu state */ signed char state; /* physical cpu state */
signed char polarization; /* physical polarization */ signed char polarization; /* physical polarization */
u16 address; /* physical cpu address */ u16 address; /* physical cpu address */
......
...@@ -72,6 +72,7 @@ struct sclp_info { ...@@ -72,6 +72,7 @@ struct sclp_info {
unsigned char has_core_type : 1; unsigned char has_core_type : 1;
unsigned char has_sprp : 1; unsigned char has_sprp : 1;
unsigned char has_hvs : 1; unsigned char has_hvs : 1;
unsigned char has_wti : 1;
unsigned char has_esca : 1; unsigned char has_esca : 1;
unsigned char has_sief2 : 1; unsigned char has_sief2 : 1;
unsigned char has_64bscao : 1; unsigned char has_64bscao : 1;
......
...@@ -34,6 +34,7 @@ ...@@ -34,6 +34,7 @@
#define MACHINE_FLAG_SCC BIT(17) #define MACHINE_FLAG_SCC BIT(17)
#define MACHINE_FLAG_PCI_MIO BIT(18) #define MACHINE_FLAG_PCI_MIO BIT(18)
#define MACHINE_FLAG_RDP BIT(19) #define MACHINE_FLAG_RDP BIT(19)
#define MACHINE_FLAG_SEQ_INSN BIT(20)
#define LPP_MAGIC BIT(31) #define LPP_MAGIC BIT(31)
#define LPP_PID_MASK _AC(0xffffffff, UL) #define LPP_PID_MASK _AC(0xffffffff, UL)
...@@ -95,6 +96,7 @@ extern unsigned long mio_wb_bit_mask; ...@@ -95,6 +96,7 @@ extern unsigned long mio_wb_bit_mask;
#define MACHINE_HAS_SCC (get_lowcore()->machine_flags & MACHINE_FLAG_SCC) #define MACHINE_HAS_SCC (get_lowcore()->machine_flags & MACHINE_FLAG_SCC)
#define MACHINE_HAS_PCI_MIO (get_lowcore()->machine_flags & MACHINE_FLAG_PCI_MIO) #define MACHINE_HAS_PCI_MIO (get_lowcore()->machine_flags & MACHINE_FLAG_PCI_MIO)
#define MACHINE_HAS_RDP (get_lowcore()->machine_flags & MACHINE_FLAG_RDP) #define MACHINE_HAS_RDP (get_lowcore()->machine_flags & MACHINE_FLAG_RDP)
#define MACHINE_HAS_SEQ_INSN (get_lowcore()->machine_flags & MACHINE_FLAG_SEQ_INSN)
/* /*
* Console mode. Override with conmode= * Console mode. Override with conmode=
...@@ -115,6 +117,8 @@ extern unsigned int console_irq; ...@@ -115,6 +117,8 @@ extern unsigned int console_irq;
#define SET_CONSOLE_VT220 do { console_mode = 4; } while (0) #define SET_CONSOLE_VT220 do { console_mode = 4; } while (0)
#define SET_CONSOLE_HVC do { console_mode = 5; } while (0) #define SET_CONSOLE_HVC do { console_mode = 5; } while (0)
void register_early_console(void);
#ifdef CONFIG_VMCP #ifdef CONFIG_VMCP
void vmcp_cma_reserve(void); void vmcp_cma_reserve(void);
#else #else
......
...@@ -12,6 +12,7 @@ ...@@ -12,6 +12,7 @@
#include <asm/processor.h> #include <asm/processor.h>
#define raw_smp_processor_id() (get_lowcore()->cpu_nr) #define raw_smp_processor_id() (get_lowcore()->cpu_nr)
#define arch_scale_cpu_capacity smp_cpu_get_capacity
extern struct mutex smp_cpu_state_mutex; extern struct mutex smp_cpu_state_mutex;
extern unsigned int smp_cpu_mt_shift; extern unsigned int smp_cpu_mt_shift;
...@@ -34,6 +35,9 @@ extern void smp_save_dump_secondary_cpus(void); ...@@ -34,6 +35,9 @@ extern void smp_save_dump_secondary_cpus(void);
extern void smp_yield_cpu(int cpu); extern void smp_yield_cpu(int cpu);
extern void smp_cpu_set_polarization(int cpu, int val); extern void smp_cpu_set_polarization(int cpu, int val);
extern int smp_cpu_get_polarization(int cpu); extern int smp_cpu_get_polarization(int cpu);
extern void smp_cpu_set_capacity(int cpu, unsigned long val);
extern void smp_set_core_capacity(int cpu, unsigned long val);
extern unsigned long smp_cpu_get_capacity(int cpu);
extern int smp_cpu_get_cpu_address(int cpu); extern int smp_cpu_get_cpu_address(int cpu);
extern void smp_fill_possible_mask(void); extern void smp_fill_possible_mask(void);
extern void smp_detect_cpus(void); extern void smp_detect_cpus(void);
......
...@@ -67,6 +67,9 @@ static inline void topology_expect_change(void) { } ...@@ -67,6 +67,9 @@ static inline void topology_expect_change(void) { }
#define POLARIZATION_VM (2) #define POLARIZATION_VM (2)
#define POLARIZATION_VH (3) #define POLARIZATION_VH (3)
#define CPU_CAPACITY_HIGH SCHED_CAPACITY_SCALE
#define CPU_CAPACITY_LOW (SCHED_CAPACITY_SCALE >> 3)
#define SD_BOOK_INIT SD_CPU_INIT #define SD_BOOK_INIT SD_CPU_INIT
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Tracepoint header for hiperdispatch
*
* Copyright IBM Corp. 2024
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM s390
#if !defined(_TRACE_S390_HIPERDISPATCH_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_S390_HIPERDISPATCH_H
#include <linux/tracepoint.h>
#undef TRACE_INCLUDE_PATH
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_PATH asm/trace
#define TRACE_INCLUDE_FILE hiperdispatch
TRACE_EVENT(s390_hd_work_fn,
TP_PROTO(int steal_time_percentage,
int entitled_core_count,
int highcap_core_count),
TP_ARGS(steal_time_percentage,
entitled_core_count,
highcap_core_count),
TP_STRUCT__entry(__field(int, steal_time_percentage)
__field(int, entitled_core_count)
__field(int, highcap_core_count)),
TP_fast_assign(__entry->steal_time_percentage = steal_time_percentage;
__entry->entitled_core_count = entitled_core_count;
__entry->highcap_core_count = highcap_core_count;),
TP_printk("steal: %d entitled_core_count: %d highcap_core_count: %d",
__entry->steal_time_percentage,
__entry->entitled_core_count,
__entry->highcap_core_count)
);
TRACE_EVENT(s390_hd_rebuild_domains,
TP_PROTO(int current_highcap_core_count,
int new_highcap_core_count),
TP_ARGS(current_highcap_core_count,
new_highcap_core_count),
TP_STRUCT__entry(__field(int, current_highcap_core_count)
__field(int, new_highcap_core_count)),
TP_fast_assign(__entry->current_highcap_core_count = current_highcap_core_count;
__entry->new_highcap_core_count = new_highcap_core_count),
TP_printk("change highcap_core_count: %u -> %u",
__entry->current_highcap_core_count,
__entry->new_highcap_core_count)
);
#endif /* _TRACE_S390_HIPERDISPATCH_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
...@@ -41,6 +41,10 @@ ...@@ -41,6 +41,10 @@
#define PKEY_KEYTYPE_ECC_P521 7 #define PKEY_KEYTYPE_ECC_P521 7
#define PKEY_KEYTYPE_ECC_ED25519 8 #define PKEY_KEYTYPE_ECC_ED25519 8
#define PKEY_KEYTYPE_ECC_ED448 9 #define PKEY_KEYTYPE_ECC_ED448 9
#define PKEY_KEYTYPE_AES_XTS_128 10
#define PKEY_KEYTYPE_AES_XTS_256 11
#define PKEY_KEYTYPE_HMAC_512 12
#define PKEY_KEYTYPE_HMAC_1024 13
/* the newer ioctls use a pkey_key_type enum for type information */ /* the newer ioctls use a pkey_key_type enum for type information */
enum pkey_key_type { enum pkey_key_type {
...@@ -50,6 +54,7 @@ enum pkey_key_type { ...@@ -50,6 +54,7 @@ enum pkey_key_type {
PKEY_TYPE_CCA_ECC = (__u32) 0x1f, PKEY_TYPE_CCA_ECC = (__u32) 0x1f,
PKEY_TYPE_EP11_AES = (__u32) 6, PKEY_TYPE_EP11_AES = (__u32) 6,
PKEY_TYPE_EP11_ECC = (__u32) 7, PKEY_TYPE_EP11_ECC = (__u32) 7,
PKEY_TYPE_PROTKEY = (__u32) 8,
}; };
/* the newer ioctls use a pkey_key_size enum for key size information */ /* the newer ioctls use a pkey_key_size enum for key size information */
......
...@@ -36,22 +36,23 @@ CFLAGS_stacktrace.o += -fno-optimize-sibling-calls ...@@ -36,22 +36,23 @@ CFLAGS_stacktrace.o += -fno-optimize-sibling-calls
CFLAGS_dumpstack.o += -fno-optimize-sibling-calls CFLAGS_dumpstack.o += -fno-optimize-sibling-calls
CFLAGS_unwind_bc.o += -fno-optimize-sibling-calls CFLAGS_unwind_bc.o += -fno-optimize-sibling-calls
obj-y := head64.o traps.o time.o process.o earlypgm.o early.o setup.o idle.o vtime.o obj-y := head64.o traps.o time.o process.o early.o setup.o idle.o vtime.o
obj-y += processor.o syscall.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o obj-y += processor.o syscall.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o
obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o cpufeature.o obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o cpufeature.o
obj-y += sysinfo.o lgr.o os_info.o ctlreg.o obj-y += sysinfo.o lgr.o os_info.o ctlreg.o
obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
obj-y += entry.o reipl.o kdebugfs.o alternative.o obj-y += entry.o reipl.o kdebugfs.o alternative.o
obj-y += nospec-branch.o ipl_vmparm.o machine_kexec_reloc.o unwind_bc.o obj-y += nospec-branch.o ipl_vmparm.o machine_kexec_reloc.o unwind_bc.o
obj-y += smp.o text_amode31.o stacktrace.o abs_lowcore.o facility.o uv.o obj-y += smp.o text_amode31.o stacktrace.o abs_lowcore.o facility.o uv.o wti.o
extra-y += vmlinux.lds extra-y += vmlinux.lds
obj-$(CONFIG_SYSFS) += nospec-sysfs.o obj-$(CONFIG_SYSFS) += nospec-sysfs.o
CFLAGS_REMOVE_nospec-branch.o += $(CC_FLAGS_EXPOLINE) CFLAGS_REMOVE_nospec-branch.o += $(CC_FLAGS_EXPOLINE)
obj-$(CONFIG_SYSFS) += cpacf.o
obj-$(CONFIG_MODULES) += module.o obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_SCHED_TOPOLOGY) += topology.o obj-$(CONFIG_SCHED_TOPOLOGY) += topology.o hiperdispatch.o
obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_NUMA) += numa.o
obj-$(CONFIG_AUDIT) += audit.o obj-$(CONFIG_AUDIT) += audit.o
compat-obj-$(CONFIG_AUDIT) += compat_audit.o compat-obj-$(CONFIG_AUDIT) += compat_audit.o
......
...@@ -112,8 +112,7 @@ int main(void) ...@@ -112,8 +112,7 @@ int main(void)
OFFSET(__LC_MCK_NEW_PSW, lowcore, mcck_new_psw); OFFSET(__LC_MCK_NEW_PSW, lowcore, mcck_new_psw);
OFFSET(__LC_IO_NEW_PSW, lowcore, io_new_psw); OFFSET(__LC_IO_NEW_PSW, lowcore, io_new_psw);
/* software defined lowcore locations 0x200 - 0xdff*/ /* software defined lowcore locations 0x200 - 0xdff*/
OFFSET(__LC_SAVE_AREA_SYNC, lowcore, save_area_sync); OFFSET(__LC_SAVE_AREA, lowcore, save_area);
OFFSET(__LC_SAVE_AREA_ASYNC, lowcore, save_area_async);
OFFSET(__LC_SAVE_AREA_RESTART, lowcore, save_area_restart); OFFSET(__LC_SAVE_AREA_RESTART, lowcore, save_area_restart);
OFFSET(__LC_PCPU, lowcore, pcpu); OFFSET(__LC_PCPU, lowcore, pcpu);
OFFSET(__LC_RETURN_PSW, lowcore, return_psw); OFFSET(__LC_RETURN_PSW, lowcore, return_psw);
......
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright IBM Corp. 2024
*/
#define KMSG_COMPONENT "cpacf"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/sysfs.h>
#include <asm/cpacf.h>
#define CPACF_QUERY(name, instruction) \
static ssize_t name##_query_raw_read(struct file *fp, \
struct kobject *kobj, \
struct bin_attribute *attr, \
char *buf, loff_t offs, \
size_t count) \
{ \
cpacf_mask_t mask; \
\
if (!cpacf_query(CPACF_##instruction, &mask)) \
return -EOPNOTSUPP; \
return memory_read_from_buffer(buf, count, &offs, &mask, sizeof(mask)); \
} \
static BIN_ATTR_RO(name##_query_raw, sizeof(cpacf_mask_t))
CPACF_QUERY(km, KM);
CPACF_QUERY(kmc, KMC);
CPACF_QUERY(kimd, KIMD);
CPACF_QUERY(klmd, KLMD);
CPACF_QUERY(kmac, KMAC);
CPACF_QUERY(pckmo, PCKMO);
CPACF_QUERY(kmf, KMF);
CPACF_QUERY(kmctr, KMCTR);
CPACF_QUERY(kmo, KMO);
CPACF_QUERY(pcc, PCC);
CPACF_QUERY(prno, PRNO);
CPACF_QUERY(kma, KMA);
CPACF_QUERY(kdsa, KDSA);
#define CPACF_QAI(name, instruction) \
static ssize_t name##_query_auth_info_raw_read( \
struct file *fp, struct kobject *kobj, \
struct bin_attribute *attr, char *buf, loff_t offs, \
size_t count) \
{ \
cpacf_qai_t qai; \
\
if (!cpacf_qai(CPACF_##instruction, &qai)) \
return -EOPNOTSUPP; \
return memory_read_from_buffer(buf, count, &offs, &qai, \
sizeof(qai)); \
} \
static BIN_ATTR_RO(name##_query_auth_info_raw, sizeof(cpacf_qai_t))
CPACF_QAI(km, KM);
CPACF_QAI(kmc, KMC);
CPACF_QAI(kimd, KIMD);
CPACF_QAI(klmd, KLMD);
CPACF_QAI(kmac, KMAC);
CPACF_QAI(pckmo, PCKMO);
CPACF_QAI(kmf, KMF);
CPACF_QAI(kmctr, KMCTR);
CPACF_QAI(kmo, KMO);
CPACF_QAI(pcc, PCC);
CPACF_QAI(prno, PRNO);
CPACF_QAI(kma, KMA);
CPACF_QAI(kdsa, KDSA);
static struct bin_attribute *cpacf_attrs[] = {
&bin_attr_km_query_raw,
&bin_attr_kmc_query_raw,
&bin_attr_kimd_query_raw,
&bin_attr_klmd_query_raw,
&bin_attr_kmac_query_raw,
&bin_attr_pckmo_query_raw,
&bin_attr_kmf_query_raw,
&bin_attr_kmctr_query_raw,
&bin_attr_kmo_query_raw,
&bin_attr_pcc_query_raw,
&bin_attr_prno_query_raw,
&bin_attr_kma_query_raw,
&bin_attr_kdsa_query_raw,
&bin_attr_km_query_auth_info_raw,
&bin_attr_kmc_query_auth_info_raw,
&bin_attr_kimd_query_auth_info_raw,
&bin_attr_klmd_query_auth_info_raw,
&bin_attr_kmac_query_auth_info_raw,
&bin_attr_pckmo_query_auth_info_raw,
&bin_attr_kmf_query_auth_info_raw,
&bin_attr_kmctr_query_auth_info_raw,
&bin_attr_kmo_query_auth_info_raw,
&bin_attr_pcc_query_auth_info_raw,
&bin_attr_prno_query_auth_info_raw,
&bin_attr_kma_query_auth_info_raw,
&bin_attr_kdsa_query_auth_info_raw,
NULL,
};
static const struct attribute_group cpacf_attr_grp = {
.name = "cpacf",
.bin_attrs = cpacf_attrs,
};
static int __init cpacf_init(void)
{
struct device *cpu_root;
int rc = 0;
cpu_root = bus_get_dev_root(&cpu_subsys);
if (cpu_root) {
rc = sysfs_create_group(&cpu_root->kobj, &cpacf_attr_grp);
put_device(cpu_root);
}
return rc;
}
device_initcall(cpacf_init);
...@@ -52,6 +52,7 @@ static const struct diag_desc diag_map[NR_DIAG_STAT] = { ...@@ -52,6 +52,7 @@ static const struct diag_desc diag_map[NR_DIAG_STAT] = {
[DIAG_STAT_X308] = { .code = 0x308, .name = "List-Directed IPL" }, [DIAG_STAT_X308] = { .code = 0x308, .name = "List-Directed IPL" },
[DIAG_STAT_X318] = { .code = 0x318, .name = "CP Name and Version Codes" }, [DIAG_STAT_X318] = { .code = 0x318, .name = "CP Name and Version Codes" },
[DIAG_STAT_X320] = { .code = 0x320, .name = "Certificate Store" }, [DIAG_STAT_X320] = { .code = 0x320, .name = "Certificate Store" },
[DIAG_STAT_X49C] = { .code = 0x49c, .name = "Warning-Track Interruption" },
[DIAG_STAT_X500] = { .code = 0x500, .name = "Virtio Service" }, [DIAG_STAT_X500] = { .code = 0x500, .name = "Virtio Service" },
}; };
...@@ -303,3 +304,19 @@ int diag26c(void *req, void *resp, enum diag26c_sc subcode) ...@@ -303,3 +304,19 @@ int diag26c(void *req, void *resp, enum diag26c_sc subcode)
return diag_amode31_ops.diag26c(virt_to_phys(req), virt_to_phys(resp), subcode); return diag_amode31_ops.diag26c(virt_to_phys(req), virt_to_phys(resp), subcode);
} }
EXPORT_SYMBOL(diag26c); EXPORT_SYMBOL(diag26c);
int diag49c(unsigned long subcode)
{
int rc;
diag_stat_inc(DIAG_STAT_X49C);
asm volatile(
" diag %[subcode],0,0x49c\n"
" ipm %[rc]\n"
" srl %[rc],28\n"
: [rc] "=d" (rc)
: [subcode] "d" (subcode)
: "cc");
return rc;
}
EXPORT_SYMBOL(diag49c);
...@@ -122,6 +122,7 @@ enum { ...@@ -122,6 +122,7 @@ enum {
U8_32, /* 8 bit unsigned value starting at 32 */ U8_32, /* 8 bit unsigned value starting at 32 */
U12_16, /* 12 bit unsigned value starting at 16 */ U12_16, /* 12 bit unsigned value starting at 16 */
U16_16, /* 16 bit unsigned value starting at 16 */ U16_16, /* 16 bit unsigned value starting at 16 */
U16_20, /* 16 bit unsigned value starting at 20 */
U16_32, /* 16 bit unsigned value starting at 32 */ U16_32, /* 16 bit unsigned value starting at 32 */
U32_16, /* 32 bit unsigned value starting at 16 */ U32_16, /* 32 bit unsigned value starting at 16 */
VX_12, /* Vector index register starting at position 12 */ VX_12, /* Vector index register starting at position 12 */
...@@ -184,6 +185,7 @@ static const struct s390_operand operands[] = { ...@@ -184,6 +185,7 @@ static const struct s390_operand operands[] = {
[U8_32] = { 8, 32, 0 }, [U8_32] = { 8, 32, 0 },
[U12_16] = { 12, 16, 0 }, [U12_16] = { 12, 16, 0 },
[U16_16] = { 16, 16, 0 }, [U16_16] = { 16, 16, 0 },
[U16_20] = { 16, 20, 0 },
[U16_32] = { 16, 32, 0 }, [U16_32] = { 16, 32, 0 },
[U32_16] = { 32, 16, 0 }, [U32_16] = { 32, 16, 0 },
[VX_12] = { 4, 12, OPERAND_INDEX | OPERAND_VR }, [VX_12] = { 4, 12, OPERAND_INDEX | OPERAND_VR },
...@@ -257,7 +259,6 @@ static const unsigned char formats[][6] = { ...@@ -257,7 +259,6 @@ static const unsigned char formats[][6] = {
[INSTR_RSL_R0RD] = { D_20, L4_8, B_16, 0, 0, 0 }, [INSTR_RSL_R0RD] = { D_20, L4_8, B_16, 0, 0, 0 },
[INSTR_RSY_AARD] = { A_8, A_12, D20_20, B_16, 0, 0 }, [INSTR_RSY_AARD] = { A_8, A_12, D20_20, B_16, 0, 0 },
[INSTR_RSY_CCRD] = { C_8, C_12, D20_20, B_16, 0, 0 }, [INSTR_RSY_CCRD] = { C_8, C_12, D20_20, B_16, 0, 0 },
[INSTR_RSY_RDRU] = { R_8, D20_20, B_16, U4_12, 0, 0 },
[INSTR_RSY_RRRD] = { R_8, R_12, D20_20, B_16, 0, 0 }, [INSTR_RSY_RRRD] = { R_8, R_12, D20_20, B_16, 0, 0 },
[INSTR_RSY_RURD] = { R_8, U4_12, D20_20, B_16, 0, 0 }, [INSTR_RSY_RURD] = { R_8, U4_12, D20_20, B_16, 0, 0 },
[INSTR_RSY_RURD2] = { R_8, D20_20, B_16, U4_12, 0, 0 }, [INSTR_RSY_RURD2] = { R_8, D20_20, B_16, U4_12, 0, 0 },
...@@ -300,14 +301,17 @@ static const unsigned char formats[][6] = { ...@@ -300,14 +301,17 @@ static const unsigned char formats[][6] = {
[INSTR_VRI_V0UU2] = { V_8, U16_16, U4_32, 0, 0, 0 }, [INSTR_VRI_V0UU2] = { V_8, U16_16, U4_32, 0, 0, 0 },
[INSTR_VRI_V0UUU] = { V_8, U8_16, U8_24, U4_32, 0, 0 }, [INSTR_VRI_V0UUU] = { V_8, U8_16, U8_24, U4_32, 0, 0 },
[INSTR_VRI_VR0UU] = { V_8, R_12, U8_28, U4_24, 0, 0 }, [INSTR_VRI_VR0UU] = { V_8, R_12, U8_28, U4_24, 0, 0 },
[INSTR_VRI_VV0UU] = { V_8, V_12, U8_28, U4_24, 0, 0 },
[INSTR_VRI_VVUU] = { V_8, V_12, U16_16, U4_32, 0, 0 }, [INSTR_VRI_VVUU] = { V_8, V_12, U16_16, U4_32, 0, 0 },
[INSTR_VRI_VVUUU] = { V_8, V_12, U12_16, U4_32, U4_28, 0 }, [INSTR_VRI_VVUUU] = { V_8, V_12, U12_16, U4_32, U4_28, 0 },
[INSTR_VRI_VVUUU2] = { V_8, V_12, U8_28, U8_16, U4_24, 0 }, [INSTR_VRI_VVUUU2] = { V_8, V_12, U8_28, U8_16, U4_24, 0 },
[INSTR_VRI_VVV0U] = { V_8, V_12, V_16, U8_24, 0, 0 }, [INSTR_VRI_VVV0U] = { V_8, V_12, V_16, U8_24, 0, 0 },
[INSTR_VRI_VVV0UU] = { V_8, V_12, V_16, U8_24, U4_32, 0 }, [INSTR_VRI_VVV0UU] = { V_8, V_12, V_16, U8_24, U4_32, 0 },
[INSTR_VRI_VVV0UU2] = { V_8, V_12, V_16, U8_28, U4_24, 0 }, [INSTR_VRI_VVV0UU2] = { V_8, V_12, V_16, U8_28, U4_24, 0 },
[INSTR_VRR_0V] = { V_12, 0, 0, 0, 0, 0 }, [INSTR_VRI_VVV0UV] = { V_8, V_12, V_16, V_32, U8_24, 0 },
[INSTR_VRR_0V0U] = { V_12, U16_20, 0, 0, 0, 0 },
[INSTR_VRR_0VV0U] = { V_12, V_16, U4_24, 0, 0, 0 }, [INSTR_VRR_0VV0U] = { V_12, V_16, U4_24, 0, 0, 0 },
[INSTR_VRR_0VVU] = { V_12, V_16, U16_20, 0, 0, 0 },
[INSTR_VRR_RV0UU] = { R_8, V_12, U4_24, U4_28, 0, 0 }, [INSTR_VRR_RV0UU] = { R_8, V_12, U4_24, U4_28, 0, 0 },
[INSTR_VRR_VRR] = { V_8, R_12, R_16, 0, 0, 0 }, [INSTR_VRR_VRR] = { V_8, R_12, R_16, 0, 0, 0 },
[INSTR_VRR_VV] = { V_8, V_12, 0, 0, 0, 0 }, [INSTR_VRR_VV] = { V_8, V_12, 0, 0, 0, 0 },
...@@ -455,21 +459,21 @@ static int print_insn(char *buffer, unsigned char *code, unsigned long addr) ...@@ -455,21 +459,21 @@ static int print_insn(char *buffer, unsigned char *code, unsigned long addr)
if (separator) if (separator)
ptr += sprintf(ptr, "%c", separator); ptr += sprintf(ptr, "%c", separator);
if (operand->flags & OPERAND_GPR) if (operand->flags & OPERAND_GPR)
ptr += sprintf(ptr, "%%r%i", value); ptr += sprintf(ptr, "%%r%u", value);
else if (operand->flags & OPERAND_FPR) else if (operand->flags & OPERAND_FPR)
ptr += sprintf(ptr, "%%f%i", value); ptr += sprintf(ptr, "%%f%u", value);
else if (operand->flags & OPERAND_AR) else if (operand->flags & OPERAND_AR)
ptr += sprintf(ptr, "%%a%i", value); ptr += sprintf(ptr, "%%a%u", value);
else if (operand->flags & OPERAND_CR) else if (operand->flags & OPERAND_CR)
ptr += sprintf(ptr, "%%c%i", value); ptr += sprintf(ptr, "%%c%u", value);
else if (operand->flags & OPERAND_VR) else if (operand->flags & OPERAND_VR)
ptr += sprintf(ptr, "%%v%i", value); ptr += sprintf(ptr, "%%v%u", value);
else if (operand->flags & OPERAND_PCREL) { else if (operand->flags & OPERAND_PCREL) {
void *pcrel = (void *)((int)value + addr); void *pcrel = (void *)((int)value + addr);
ptr += sprintf(ptr, "%px", pcrel); ptr += sprintf(ptr, "%px", pcrel);
} else if (operand->flags & OPERAND_SIGNED) } else if (operand->flags & OPERAND_SIGNED)
ptr += sprintf(ptr, "%i", value); ptr += sprintf(ptr, "%i", (int)value);
else else
ptr += sprintf(ptr, "%u", value); ptr += sprintf(ptr, "%u", value);
if (operand->flags & OPERAND_DISP) if (operand->flags & OPERAND_DISP)
......
...@@ -7,6 +7,7 @@ ...@@ -7,6 +7,7 @@
#define KMSG_COMPONENT "setup" #define KMSG_COMPONENT "setup"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
#include <linux/sched/debug.h>
#include <linux/compiler.h> #include <linux/compiler.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/errno.h> #include <linux/errno.h>
...@@ -175,20 +176,45 @@ static __init void setup_topology(void) ...@@ -175,20 +176,45 @@ static __init void setup_topology(void)
topology_max_mnest = max_mnest; topology_max_mnest = max_mnest;
} }
void __do_early_pgm_check(struct pt_regs *regs) void __init __do_early_pgm_check(struct pt_regs *regs)
{ {
if (!fixup_exception(regs)) struct lowcore *lc = get_lowcore();
disabled_wait(); unsigned long ip;
regs->int_code = lc->pgm_int_code;
regs->int_parm_long = lc->trans_exc_code;
ip = __rewind_psw(regs->psw, regs->int_code >> 16);
/* Monitor Event? Might be a warning */
if ((regs->int_code & PGM_INT_CODE_MASK) == 0x40) {
if (report_bug(ip, regs) == BUG_TRAP_TYPE_WARN)
return;
}
if (fixup_exception(regs))
return;
/*
* Unhandled exception - system cannot continue but try to get some
* helpful messages to the console. Use early_printk() to print
* some basic information in case it is too early for printk().
*/
register_early_console();
early_printk("PANIC: early exception %04x PSW: %016lx %016lx\n",
regs->int_code & 0xffff, regs->psw.mask, regs->psw.addr);
show_regs(regs);
disabled_wait();
} }
static noinline __init void setup_lowcore_early(void) static noinline __init void setup_lowcore_early(void)
{ {
struct lowcore *lc = get_lowcore();
psw_t psw; psw_t psw;
psw.addr = (unsigned long)early_pgm_check_handler; psw.addr = (unsigned long)early_pgm_check_handler;
psw.mask = PSW_KERNEL_BITS; psw.mask = PSW_KERNEL_BITS;
get_lowcore()->program_new_psw = psw; lc->program_new_psw = psw;
get_lowcore()->preempt_count = INIT_PREEMPT_COUNT; lc->preempt_count = INIT_PREEMPT_COUNT;
lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
} }
static __init void detect_diag9c(void) static __init void detect_diag9c(void)
...@@ -242,6 +268,8 @@ static __init void detect_machine_facilities(void) ...@@ -242,6 +268,8 @@ static __init void detect_machine_facilities(void)
} }
if (test_facility(194)) if (test_facility(194))
get_lowcore()->machine_flags |= MACHINE_FLAG_RDP; get_lowcore()->machine_flags |= MACHINE_FLAG_RDP;
if (test_facility(85))
get_lowcore()->machine_flags |= MACHINE_FLAG_SEQ_INSN;
} }
static inline void save_vector_registers(void) static inline void save_vector_registers(void)
......
...@@ -6,6 +6,7 @@ ...@@ -6,6 +6,7 @@
#include <linux/console.h> #include <linux/console.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/init.h> #include <linux/init.h>
#include <asm/setup.h>
#include <asm/sclp.h> #include <asm/sclp.h>
static void sclp_early_write(struct console *con, const char *s, unsigned int len) static void sclp_early_write(struct console *con, const char *s, unsigned int len)
...@@ -20,6 +21,16 @@ static struct console sclp_early_console = { ...@@ -20,6 +21,16 @@ static struct console sclp_early_console = {
.index = -1, .index = -1,
}; };
void __init register_early_console(void)
{
if (early_console)
return;
if (!sclp.has_linemode && !sclp.has_vt220)
return;
early_console = &sclp_early_console;
register_console(early_console);
}
static int __init setup_early_printk(char *buf) static int __init setup_early_printk(char *buf)
{ {
if (early_console) if (early_console)
...@@ -27,10 +38,7 @@ static int __init setup_early_printk(char *buf) ...@@ -27,10 +38,7 @@ static int __init setup_early_printk(char *buf)
/* Accept only "earlyprintk" and "earlyprintk=sclp" */ /* Accept only "earlyprintk" and "earlyprintk=sclp" */
if (buf && !str_has_prefix(buf, "sclp")) if (buf && !str_has_prefix(buf, "sclp"))
return 0; return 0;
if (!sclp.has_linemode && !sclp.has_vt220) register_early_console();
return 0;
early_console = &sclp_early_console;
register_console(early_console);
return 0; return 0;
} }
early_param("earlyprintk", setup_early_printk); early_param("earlyprintk", setup_early_printk);
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright IBM Corp. 2006, 2007
* Author(s): Michael Holzheu <holzheu@de.ibm.com>
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
SYM_CODE_START(early_pgm_check_handler)
stmg %r8,%r15,__LC_SAVE_AREA_SYNC
aghi %r15,-(STACK_FRAME_OVERHEAD+__PT_SIZE)
la %r11,STACK_FRAME_OVERHEAD(%r15)
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
stmg %r0,%r7,__PT_R0(%r11)
mvc __PT_PSW(16,%r11),__LC_PGM_OLD_PSW
mvc __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
lgr %r2,%r11
brasl %r14,__do_early_pgm_check
mvc __LC_RETURN_PSW(16),STACK_FRAME_OVERHEAD+__PT_PSW(%r15)
lmg %r0,%r15,STACK_FRAME_OVERHEAD+__PT_R0(%r15)
lpswe __LC_RETURN_PSW
SYM_CODE_END(early_pgm_check_handler)
...@@ -264,7 +264,7 @@ EXPORT_SYMBOL(sie_exit) ...@@ -264,7 +264,7 @@ EXPORT_SYMBOL(sie_exit)
*/ */
SYM_CODE_START(system_call) SYM_CODE_START(system_call)
STMG_LC %r8,%r15,__LC_SAVE_AREA_SYNC STMG_LC %r8,%r15,__LC_SAVE_AREA
GET_LC %r13 GET_LC %r13
stpt __LC_SYS_ENTER_TIMER(%r13) stpt __LC_SYS_ENTER_TIMER(%r13)
BPOFF BPOFF
...@@ -287,7 +287,7 @@ SYM_CODE_START(system_call) ...@@ -287,7 +287,7 @@ SYM_CODE_START(system_call)
xgr %r10,%r10 xgr %r10,%r10
xgr %r11,%r11 xgr %r11,%r11
la %r2,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs la %r2,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs
mvc __PT_R8(64,%r2),__LC_SAVE_AREA_SYNC(%r13) mvc __PT_R8(64,%r2),__LC_SAVE_AREA(%r13)
MBEAR %r2,%r13 MBEAR %r2,%r13
lgr %r3,%r14 lgr %r3,%r14
brasl %r14,__do_syscall brasl %r14,__do_syscall
...@@ -323,7 +323,7 @@ SYM_CODE_END(ret_from_fork) ...@@ -323,7 +323,7 @@ SYM_CODE_END(ret_from_fork)
*/ */
SYM_CODE_START(pgm_check_handler) SYM_CODE_START(pgm_check_handler)
STMG_LC %r8,%r15,__LC_SAVE_AREA_SYNC STMG_LC %r8,%r15,__LC_SAVE_AREA
GET_LC %r13 GET_LC %r13
stpt __LC_SYS_ENTER_TIMER(%r13) stpt __LC_SYS_ENTER_TIMER(%r13)
BPOFF BPOFF
...@@ -338,16 +338,16 @@ SYM_CODE_START(pgm_check_handler) ...@@ -338,16 +338,16 @@ SYM_CODE_START(pgm_check_handler)
jnz 2f # -> enabled, can't be a double fault jnz 2f # -> enabled, can't be a double fault
tm __LC_PGM_ILC+3(%r13),0x80 # check for per exception tm __LC_PGM_ILC+3(%r13),0x80 # check for per exception
jnz .Lpgm_svcper # -> single stepped svc jnz .Lpgm_svcper # -> single stepped svc
2: CHECK_STACK __LC_SAVE_AREA_SYNC,%r13 2: CHECK_STACK __LC_SAVE_AREA,%r13
aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE) aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE)
# CHECK_VMAP_STACK branches to stack_overflow or 4f # CHECK_VMAP_STACK branches to stack_overflow or 4f
CHECK_VMAP_STACK __LC_SAVE_AREA_SYNC,%r13,4f CHECK_VMAP_STACK __LC_SAVE_AREA,%r13,4f
3: lg %r15,__LC_KERNEL_STACK(%r13) 3: lg %r15,__LC_KERNEL_STACK(%r13)
4: la %r11,STACK_FRAME_OVERHEAD(%r15) 4: la %r11,STACK_FRAME_OVERHEAD(%r15)
xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11) xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
stmg %r0,%r7,__PT_R0(%r11) stmg %r0,%r7,__PT_R0(%r11)
mvc __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC(%r13) mvc __PT_R8(64,%r11),__LC_SAVE_AREA(%r13)
mvc __PT_LAST_BREAK(8,%r11),__LC_PGM_LAST_BREAK(%r13) mvc __PT_LAST_BREAK(8,%r11),__LC_PGM_LAST_BREAK(%r13)
stctg %c1,%c1,__PT_CR1(%r11) stctg %c1,%c1,__PT_CR1(%r11)
#if IS_ENABLED(CONFIG_KVM) #if IS_ENABLED(CONFIG_KVM)
...@@ -398,7 +398,7 @@ SYM_CODE_END(pgm_check_handler) ...@@ -398,7 +398,7 @@ SYM_CODE_END(pgm_check_handler)
*/ */
.macro INT_HANDLER name,lc_old_psw,handler .macro INT_HANDLER name,lc_old_psw,handler
SYM_CODE_START(\name) SYM_CODE_START(\name)
STMG_LC %r8,%r15,__LC_SAVE_AREA_ASYNC STMG_LC %r8,%r15,__LC_SAVE_AREA
GET_LC %r13 GET_LC %r13
stckf __LC_INT_CLOCK(%r13) stckf __LC_INT_CLOCK(%r13)
stpt __LC_SYS_ENTER_TIMER(%r13) stpt __LC_SYS_ENTER_TIMER(%r13)
...@@ -414,7 +414,7 @@ SYM_CODE_START(\name) ...@@ -414,7 +414,7 @@ SYM_CODE_START(\name)
BPENTER __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST BPENTER __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST
SIEEXIT __SF_SIE_CONTROL(%r15),%r13 SIEEXIT __SF_SIE_CONTROL(%r15),%r13
#endif #endif
0: CHECK_STACK __LC_SAVE_AREA_ASYNC,%r13 0: CHECK_STACK __LC_SAVE_AREA,%r13
aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE) aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE)
j 2f j 2f
1: lctlg %c1,%c1,__LC_KERNEL_ASCE(%r13) 1: lctlg %c1,%c1,__LC_KERNEL_ASCE(%r13)
...@@ -432,7 +432,7 @@ SYM_CODE_START(\name) ...@@ -432,7 +432,7 @@ SYM_CODE_START(\name)
xgr %r7,%r7 xgr %r7,%r7
xgr %r10,%r10 xgr %r10,%r10
xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11) xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
mvc __PT_R8(64,%r11),__LC_SAVE_AREA_ASYNC(%r13) mvc __PT_R8(64,%r11),__LC_SAVE_AREA(%r13)
MBEAR %r11,%r13 MBEAR %r11,%r13
stmg %r8,%r9,__PT_PSW(%r11) stmg %r8,%r9,__PT_PSW(%r11)
lgr %r2,%r11 # pass pointer to pt_regs lgr %r2,%r11 # pass pointer to pt_regs
...@@ -599,6 +599,24 @@ SYM_CODE_START(restart_int_handler) ...@@ -599,6 +599,24 @@ SYM_CODE_START(restart_int_handler)
3: j 3b 3: j 3b
SYM_CODE_END(restart_int_handler) SYM_CODE_END(restart_int_handler)
__INIT
SYM_CODE_START(early_pgm_check_handler)
STMG_LC %r8,%r15,__LC_SAVE_AREA
GET_LC %r13
aghi %r15,-(STACK_FRAME_OVERHEAD+__PT_SIZE)
la %r11,STACK_FRAME_OVERHEAD(%r15)
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
stmg %r0,%r7,__PT_R0(%r11)
mvc __PT_PSW(16,%r11),__LC_PGM_OLD_PSW(%r13)
mvc __PT_R8(64,%r11),__LC_SAVE_AREA(%r13)
lgr %r2,%r11
brasl %r14,__do_early_pgm_check
mvc __LC_RETURN_PSW(16,%r13),STACK_FRAME_OVERHEAD+__PT_PSW(%r15)
lmg %r0,%r15,STACK_FRAME_OVERHEAD+__PT_R0(%r15)
LPSWEY __LC_RETURN_PSW,__LC_RETURN_LPSWE
SYM_CODE_END(early_pgm_check_handler)
__FINIT
.section .kprobes.text, "ax" .section .kprobes.text, "ax"
#if defined(CONFIG_CHECK_STACK) || defined(CONFIG_VMAP_STACK) #if defined(CONFIG_CHECK_STACK) || defined(CONFIG_VMAP_STACK)
......
...@@ -50,10 +50,6 @@ struct ftrace_insn { ...@@ -50,10 +50,6 @@ struct ftrace_insn {
s32 disp; s32 disp;
} __packed; } __packed;
#ifdef CONFIG_MODULES
static char *ftrace_plt;
#endif /* CONFIG_MODULES */
static const char *ftrace_shared_hotpatch_trampoline(const char **end) static const char *ftrace_shared_hotpatch_trampoline(const char **end)
{ {
const char *tstart, *tend; const char *tstart, *tend;
...@@ -73,19 +69,20 @@ static const char *ftrace_shared_hotpatch_trampoline(const char **end) ...@@ -73,19 +69,20 @@ static const char *ftrace_shared_hotpatch_trampoline(const char **end)
bool ftrace_need_init_nop(void) bool ftrace_need_init_nop(void)
{ {
return true; return !MACHINE_HAS_SEQ_INSN;
} }
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec) int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
{ {
static struct ftrace_hotpatch_trampoline *next_vmlinux_trampoline = static struct ftrace_hotpatch_trampoline *next_vmlinux_trampoline =
__ftrace_hotpatch_trampolines_start; __ftrace_hotpatch_trampolines_start;
static const char orig[6] = { 0xc0, 0x04, 0x00, 0x00, 0x00, 0x00 }; static const struct ftrace_insn orig = { .opc = 0xc004, .disp = 0 };
static struct ftrace_hotpatch_trampoline *trampoline; static struct ftrace_hotpatch_trampoline *trampoline;
struct ftrace_hotpatch_trampoline **next_trampoline; struct ftrace_hotpatch_trampoline **next_trampoline;
struct ftrace_hotpatch_trampoline *trampolines_end; struct ftrace_hotpatch_trampoline *trampolines_end;
struct ftrace_hotpatch_trampoline tmp; struct ftrace_hotpatch_trampoline tmp;
struct ftrace_insn *insn; struct ftrace_insn *insn;
struct ftrace_insn old;
const char *shared; const char *shared;
s32 disp; s32 disp;
...@@ -99,7 +96,6 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec) ...@@ -99,7 +96,6 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
if (mod) { if (mod) {
next_trampoline = &mod->arch.next_trampoline; next_trampoline = &mod->arch.next_trampoline;
trampolines_end = mod->arch.trampolines_end; trampolines_end = mod->arch.trampolines_end;
shared = ftrace_plt;
} }
#endif #endif
...@@ -107,8 +103,10 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec) ...@@ -107,8 +103,10 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
return -ENOMEM; return -ENOMEM;
trampoline = (*next_trampoline)++; trampoline = (*next_trampoline)++;
if (copy_from_kernel_nofault(&old, (void *)rec->ip, sizeof(old)))
return -EFAULT;
/* Check for the compiler-generated fentry nop (brcl 0, .). */ /* Check for the compiler-generated fentry nop (brcl 0, .). */
if (WARN_ON_ONCE(memcmp((const void *)rec->ip, &orig, sizeof(orig)))) if (WARN_ON_ONCE(memcmp(&orig, &old, sizeof(old))))
return -EINVAL; return -EINVAL;
/* Generate the trampoline. */ /* Generate the trampoline. */
...@@ -144,8 +142,35 @@ static struct ftrace_hotpatch_trampoline *ftrace_get_trampoline(struct dyn_ftrac ...@@ -144,8 +142,35 @@ static struct ftrace_hotpatch_trampoline *ftrace_get_trampoline(struct dyn_ftrac
return trampoline; return trampoline;
} }
int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, static inline struct ftrace_insn
unsigned long addr) ftrace_generate_branch_insn(unsigned long ip, unsigned long target)
{
/* brasl r0,target or brcl 0,0 */
return (struct ftrace_insn){ .opc = target ? 0xc005 : 0xc004,
.disp = target ? (target - ip) / 2 : 0 };
}
static int ftrace_patch_branch_insn(unsigned long ip, unsigned long old_target,
unsigned long target)
{
struct ftrace_insn orig = ftrace_generate_branch_insn(ip, old_target);
struct ftrace_insn new = ftrace_generate_branch_insn(ip, target);
struct ftrace_insn old;
if (!IS_ALIGNED(ip, 8))
return -EINVAL;
if (copy_from_kernel_nofault(&old, (void *)ip, sizeof(old)))
return -EFAULT;
/* Verify that the to be replaced code matches what we expect. */
if (memcmp(&orig, &old, sizeof(old)))
return -EINVAL;
s390_kernel_write((void *)ip, &new, sizeof(new));
return 0;
}
static int ftrace_modify_trampoline_call(struct dyn_ftrace *rec,
unsigned long old_addr,
unsigned long addr)
{ {
struct ftrace_hotpatch_trampoline *trampoline; struct ftrace_hotpatch_trampoline *trampoline;
u64 old; u64 old;
...@@ -161,6 +186,15 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, ...@@ -161,6 +186,15 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
return 0; return 0;
} }
int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
unsigned long addr)
{
if (MACHINE_HAS_SEQ_INSN)
return ftrace_patch_branch_insn(rec->ip, old_addr, addr);
else
return ftrace_modify_trampoline_call(rec, old_addr, addr);
}
static int ftrace_patch_branch_mask(void *addr, u16 expected, bool enable) static int ftrace_patch_branch_mask(void *addr, u16 expected, bool enable)
{ {
u16 old; u16 old;
...@@ -179,11 +213,14 @@ static int ftrace_patch_branch_mask(void *addr, u16 expected, bool enable) ...@@ -179,11 +213,14 @@ static int ftrace_patch_branch_mask(void *addr, u16 expected, bool enable)
int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
unsigned long addr) unsigned long addr)
{ {
/* Expect brcl 0xf,... */ /* Expect brcl 0xf,... for the !MACHINE_HAS_SEQ_INSN case */
return ftrace_patch_branch_mask((void *)rec->ip, 0xc0f4, false); if (MACHINE_HAS_SEQ_INSN)
return ftrace_patch_branch_insn(rec->ip, addr, 0);
else
return ftrace_patch_branch_mask((void *)rec->ip, 0xc0f4, false);
} }
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) static int ftrace_make_trampoline_call(struct dyn_ftrace *rec, unsigned long addr)
{ {
struct ftrace_hotpatch_trampoline *trampoline; struct ftrace_hotpatch_trampoline *trampoline;
...@@ -195,6 +232,14 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) ...@@ -195,6 +232,14 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
return ftrace_patch_branch_mask((void *)rec->ip, 0xc004, true); return ftrace_patch_branch_mask((void *)rec->ip, 0xc004, true);
} }
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
if (MACHINE_HAS_SEQ_INSN)
return ftrace_patch_branch_insn(rec->ip, 0, addr);
else
return ftrace_make_trampoline_call(rec, addr);
}
int ftrace_update_ftrace_func(ftrace_func_t func) int ftrace_update_ftrace_func(ftrace_func_t func)
{ {
ftrace_func = func; ftrace_func = func;
...@@ -215,25 +260,6 @@ void ftrace_arch_code_modify_post_process(void) ...@@ -215,25 +260,6 @@ void ftrace_arch_code_modify_post_process(void)
text_poke_sync_lock(); text_poke_sync_lock();
} }
#ifdef CONFIG_MODULES
static int __init ftrace_plt_init(void)
{
const char *start, *end;
ftrace_plt = execmem_alloc(EXECMEM_FTRACE, PAGE_SIZE);
if (!ftrace_plt)
panic("cannot allocate ftrace plt\n");
start = ftrace_shared_hotpatch_trampoline(&end);
memcpy(ftrace_plt, start, end - start);
set_memory_rox((unsigned long)ftrace_plt, 1);
return 0;
}
device_initcall(ftrace_plt_init);
#endif /* CONFIG_MODULES */
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* /*
* Hook the return address and push it in the stack of return addresses * Hook the return address and push it in the stack of return addresses
...@@ -264,26 +290,14 @@ NOKPROBE_SYMBOL(prepare_ftrace_return); ...@@ -264,26 +290,14 @@ NOKPROBE_SYMBOL(prepare_ftrace_return);
*/ */
int ftrace_enable_ftrace_graph_caller(void) int ftrace_enable_ftrace_graph_caller(void)
{ {
int rc;
/* Expect brc 0xf,... */ /* Expect brc 0xf,... */
rc = ftrace_patch_branch_mask(ftrace_graph_caller, 0xa7f4, false); return ftrace_patch_branch_mask(ftrace_graph_caller, 0xa7f4, false);
if (rc)
return rc;
text_poke_sync_lock();
return 0;
} }
int ftrace_disable_ftrace_graph_caller(void) int ftrace_disable_ftrace_graph_caller(void)
{ {
int rc;
/* Expect brc 0x0,... */ /* Expect brc 0x0,... */
rc = ftrace_patch_branch_mask(ftrace_graph_caller, 0xa704, true); return ftrace_patch_branch_mask(ftrace_graph_caller, 0xa704, true);
if (rc)
return rc;
text_poke_sync_lock();
return 0;
} }
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
......
...@@ -18,7 +18,5 @@ extern const char ftrace_shared_hotpatch_trampoline_br[]; ...@@ -18,7 +18,5 @@ extern const char ftrace_shared_hotpatch_trampoline_br[];
extern const char ftrace_shared_hotpatch_trampoline_br_end[]; extern const char ftrace_shared_hotpatch_trampoline_br_end[];
extern const char ftrace_shared_hotpatch_trampoline_exrl[]; extern const char ftrace_shared_hotpatch_trampoline_exrl[];
extern const char ftrace_shared_hotpatch_trampoline_exrl_end[]; extern const char ftrace_shared_hotpatch_trampoline_exrl_end[];
extern const char ftrace_plt_template[];
extern const char ftrace_plt_template_end[];
#endif /* _FTRACE_H */ #endif /* _FTRACE_H */
...@@ -76,6 +76,7 @@ static const struct irq_class irqclass_sub_desc[] = { ...@@ -76,6 +76,7 @@ static const struct irq_class irqclass_sub_desc[] = {
{.irq = IRQEXT_CMS, .name = "CMS", .desc = "[EXT] CPU-Measurement: Sampling"}, {.irq = IRQEXT_CMS, .name = "CMS", .desc = "[EXT] CPU-Measurement: Sampling"},
{.irq = IRQEXT_CMC, .name = "CMC", .desc = "[EXT] CPU-Measurement: Counter"}, {.irq = IRQEXT_CMC, .name = "CMC", .desc = "[EXT] CPU-Measurement: Counter"},
{.irq = IRQEXT_FTP, .name = "FTP", .desc = "[EXT] HMC FTP Service"}, {.irq = IRQEXT_FTP, .name = "FTP", .desc = "[EXT] HMC FTP Service"},
{.irq = IRQEXT_WTI, .name = "WTI", .desc = "[EXT] Warning Track"},
{.irq = IRQIO_CIO, .name = "CIO", .desc = "[I/O] Common I/O Layer Interrupt"}, {.irq = IRQIO_CIO, .name = "CIO", .desc = "[I/O] Common I/O Layer Interrupt"},
{.irq = IRQIO_DAS, .name = "DAS", .desc = "[I/O] DASD"}, {.irq = IRQIO_DAS, .name = "DAS", .desc = "[I/O] DASD"},
{.irq = IRQIO_C15, .name = "C15", .desc = "[I/O] 3215"}, {.irq = IRQIO_C15, .name = "C15", .desc = "[I/O] 3215"},
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <linux/ftrace.h> #include <linux/ftrace.h>
#include <linux/execmem.h> #include <linux/execmem.h>
#include <asm/text-patching.h>
#include <asm/set_memory.h> #include <asm/set_memory.h>
#include <asm/sections.h> #include <asm/sections.h>
#include <asm/dis.h> #include <asm/dis.h>
...@@ -152,7 +153,12 @@ void arch_arm_kprobe(struct kprobe *p) ...@@ -152,7 +153,12 @@ void arch_arm_kprobe(struct kprobe *p)
{ {
struct swap_insn_args args = {.p = p, .arm_kprobe = 1}; struct swap_insn_args args = {.p = p, .arm_kprobe = 1};
stop_machine_cpuslocked(swap_instruction, &args, NULL); if (MACHINE_HAS_SEQ_INSN) {
swap_instruction(&args);
text_poke_sync();
} else {
stop_machine_cpuslocked(swap_instruction, &args, NULL);
}
} }
NOKPROBE_SYMBOL(arch_arm_kprobe); NOKPROBE_SYMBOL(arch_arm_kprobe);
...@@ -160,7 +166,12 @@ void arch_disarm_kprobe(struct kprobe *p) ...@@ -160,7 +166,12 @@ void arch_disarm_kprobe(struct kprobe *p)
{ {
struct swap_insn_args args = {.p = p, .arm_kprobe = 0}; struct swap_insn_args args = {.p = p, .arm_kprobe = 0};
stop_machine_cpuslocked(swap_instruction, &args, NULL); if (MACHINE_HAS_SEQ_INSN) {
swap_instruction(&args);
text_poke_sync();
} else {
stop_machine_cpuslocked(swap_instruction, &args, NULL);
}
} }
NOKPROBE_SYMBOL(arch_disarm_kprobe); NOKPROBE_SYMBOL(arch_disarm_kprobe);
......
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
#include <asm/ftrace.h> #include <asm/ftrace.h>
#include <asm/nospec-insn.h> #include <asm/nospec-insn.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/march.h>
#define STACK_FRAME_SIZE_PTREGS (STACK_FRAME_OVERHEAD + __PT_SIZE) #define STACK_FRAME_SIZE_PTREGS (STACK_FRAME_OVERHEAD + __PT_SIZE)
#define STACK_PTREGS (STACK_FRAME_OVERHEAD) #define STACK_PTREGS (STACK_FRAME_OVERHEAD)
...@@ -88,7 +89,7 @@ SYM_CODE_START(ftrace_caller) ...@@ -88,7 +89,7 @@ SYM_CODE_START(ftrace_caller)
SYM_CODE_END(ftrace_caller) SYM_CODE_END(ftrace_caller)
SYM_CODE_START(ftrace_common) SYM_CODE_START(ftrace_common)
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES #ifdef MARCH_HAS_Z196_FEATURES
aghik %r2,%r0,-MCOUNT_INSN_SIZE aghik %r2,%r0,-MCOUNT_INSN_SIZE
lgrl %r4,function_trace_op lgrl %r4,function_trace_op
lgrl %r1,ftrace_func lgrl %r1,ftrace_func
...@@ -115,7 +116,7 @@ SYM_INNER_LABEL(ftrace_graph_caller, SYM_L_GLOBAL) ...@@ -115,7 +116,7 @@ SYM_INNER_LABEL(ftrace_graph_caller, SYM_L_GLOBAL)
.Lftrace_graph_caller_end: .Lftrace_graph_caller_end:
#endif #endif
lg %r0,(STACK_FREGS_PTREGS_PSW+8)(%r15) lg %r0,(STACK_FREGS_PTREGS_PSW+8)(%r15)
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES #ifdef MARCH_HAS_Z196_FEATURES
ltg %r1,STACK_FREGS_PTREGS_ORIG_GPR2(%r15) ltg %r1,STACK_FREGS_PTREGS_ORIG_GPR2(%r15)
locgrz %r1,%r0 locgrz %r1,%r0
#else #else
......
...@@ -22,6 +22,10 @@ ...@@ -22,6 +22,10 @@
#include <asm/hwctrset.h> #include <asm/hwctrset.h>
#include <asm/debug.h> #include <asm/debug.h>
/* Perf PMU definitions for the counter facility */
#define PERF_CPUM_CF_MAX_CTR 0xffffUL /* Max ctr for ECCTR */
#define PERF_EVENT_CPUM_CF_DIAG 0xBC000UL /* Event: Counter sets */
enum cpumf_ctr_set { enum cpumf_ctr_set {
CPUMF_CTR_SET_BASIC = 0, /* Basic Counter Set */ CPUMF_CTR_SET_BASIC = 0, /* Basic Counter Set */
CPUMF_CTR_SET_USER = 1, /* Problem-State Counter Set */ CPUMF_CTR_SET_USER = 1, /* Problem-State Counter Set */
......
...@@ -738,6 +738,22 @@ static const char * const paicrypt_ctrnames[] = { ...@@ -738,6 +738,22 @@ static const char * const paicrypt_ctrnames[] = {
[154] = "PCKMO_ENCRYPT_ECC_ED448_KEY", [154] = "PCKMO_ENCRYPT_ECC_ED448_KEY",
[155] = "IBM_RESERVED_155", [155] = "IBM_RESERVED_155",
[156] = "IBM_RESERVED_156", [156] = "IBM_RESERVED_156",
[157] = "KM_FULL_XTS_AES_128",
[158] = "KM_FULL_XTS_AES_256",
[159] = "KM_FULL_XTS_ENCRYPTED_AES_128",
[160] = "KM_FULL_XTS_ENCRYPTED_AES_256",
[161] = "KMAC_HMAC_SHA_224",
[162] = "KMAC_HMAC_SHA_256",
[163] = "KMAC_HMAC_SHA_384",
[164] = "KMAC_HMAC_SHA_512",
[165] = "KMAC_HMAC_ENCRYPTED_SHA_224",
[166] = "KMAC_HMAC_ENCRYPTED_SHA_256",
[167] = "KMAC_HMAC_ENCRYPTED_SHA_384",
[168] = "KMAC_HMAC_ENCRYPTED_SHA_512",
[169] = "PCKMO_ENCRYPT_HMAC_512_KEY",
[170] = "PCKMO_ENCRYPT_HMAC_1024_KEY",
[171] = "PCKMO_ENCRYPT_AES_XTS_128",
[172] = "PCKMO_ENCRYPT_AES_XTS_256",
}; };
static void __init attr_event_free(struct attribute **attrs, int num) static void __init attr_event_free(struct attribute **attrs, int num)
......
...@@ -635,6 +635,15 @@ static const char * const paiext_ctrnames[] = { ...@@ -635,6 +635,15 @@ static const char * const paiext_ctrnames[] = {
[25] = "NNPA_1MFRAME", [25] = "NNPA_1MFRAME",
[26] = "NNPA_2GFRAME", [26] = "NNPA_2GFRAME",
[27] = "NNPA_ACCESSEXCEPT", [27] = "NNPA_ACCESSEXCEPT",
[28] = "NNPA_TRANSFORM",
[29] = "NNPA_GELU",
[30] = "NNPA_MOMENTS",
[31] = "NNPA_LAYERNORM",
[32] = "NNPA_MATMUL_OP_BCAST1",
[33] = "NNPA_SQRT",
[34] = "NNPA_INVSQRT",
[35] = "NNPA_NORM",
[36] = "NNPA_REDUCE",
}; };
static void __init attr_event_free(struct attribute **attrs, int num) static void __init attr_event_free(struct attribute **attrs, int num)
......
...@@ -671,6 +671,25 @@ int smp_cpu_get_polarization(int cpu)
return per_cpu(pcpu_devices, cpu).polarization;
}
void smp_cpu_set_capacity(int cpu, unsigned long val)
{
per_cpu(pcpu_devices, cpu).capacity = val;
}
unsigned long smp_cpu_get_capacity(int cpu)
{
return per_cpu(pcpu_devices, cpu).capacity;
}
void smp_set_core_capacity(int cpu, unsigned long val)
{
int i;
cpu = smp_get_base_cpu(cpu);
for (i = cpu; (i <= cpu + smp_cpu_mtid) && (i < nr_cpu_ids); i++)
smp_cpu_set_capacity(i, val);
}
int smp_cpu_get_cpu_address(int cpu)
{
return per_cpu(pcpu_devices, cpu).address;
...@@ -719,6 +738,7 @@ static int smp_add_core(struct sclp_core_entry *core, cpumask_t *avail,
else
pcpu->state = CPU_STATE_STANDBY;
smp_cpu_set_polarization(cpu, POLARIZATION_UNKNOWN);
smp_cpu_set_capacity(cpu, CPU_CAPACITY_HIGH);
set_cpu_present(cpu, true);
if (!early && arch_register_cpu(cpu))
set_cpu_present(cpu, false);
...@@ -961,6 +981,7 @@ void __init smp_prepare_boot_cpu(void)
ipl_pcpu->state = CPU_STATE_CONFIGURED;
lc->pcpu = (unsigned long)ipl_pcpu;
smp_cpu_set_polarization(0, POLARIZATION_UNKNOWN);
smp_cpu_set_capacity(0, CPU_CAPACITY_HIGH);
}
void __init smp_setup_processor_id(void)
......
...@@ -162,22 +162,3 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
{
arch_stack_walk_user_common(consume_entry, cookie, NULL, regs, false);
}
unsigned long return_address(unsigned int n)
{
struct unwind_state state;
unsigned long addr;
/* Increment to skip current stack entry */
n++;
unwind_for_each_frame(&state, NULL, NULL, 0) {
addr = unwind_get_return_address(&state);
if (!addr)
break;
if (!n--)
return addr;
}
return 0;
}
EXPORT_SYMBOL_GPL(return_address);
...@@ -24,6 +24,7 @@
#include <linux/mm.h>
#include <linux/nodemask.h>
#include <linux/node.h>
#include <asm/hiperdispatch.h>
#include <asm/sysinfo.h>
#define PTF_HORIZONTAL (0UL)
...@@ -47,6 +48,7 @@ static int topology_mode = TOPOLOGY_MODE_UNINITIALIZED;
static void set_topology_timer(void);
static void topology_work_fn(struct work_struct *work);
static struct sysinfo_15_1_x *tl_info;
static int cpu_management;
static DECLARE_WORK(topology_work, topology_work_fn);
...@@ -144,6 +146,7 @@ static void add_cpus_to_mask(struct topology_core *tl_core,
cpumask_set_cpu(cpu, &book->mask);
cpumask_set_cpu(cpu, &socket->mask);
smp_cpu_set_polarization(cpu, tl_core->pp);
smp_cpu_set_capacity(cpu, CPU_CAPACITY_HIGH);
}
}
}
...@@ -270,6 +273,7 @@ void update_cpu_masks(void)
topo->drawer_id = id;
}
}
hd_reset_state();
for_each_online_cpu(cpu) {
topo = &cpu_topology[cpu];
pkg_first = cpumask_first(&topo->core_mask);
...@@ -278,8 +282,10 @@ void update_cpu_masks(void)
for_each_cpu(sibling, &topo->core_mask) {
topo_sibling = &cpu_topology[sibling];
smt_first = cpumask_first(&topo_sibling->thread_mask);
-if (sibling == smt_first)
+if (sibling == smt_first) {
topo_package->booted_cores++;
hd_add_core(sibling);
}
}
} else {
topo->booted_cores = topo_package->booted_cores;
...@@ -303,8 +309,10 @@ static void __arch_update_dedicated_flag(void *arg)
static int __arch_update_cpu_topology(void)
{
struct sysinfo_15_1_x *info = tl_info;
-int rc = 0;
+int rc, hd_status;
hd_status = 0;
rc = 0;
mutex_lock(&smp_cpu_state_mutex);
if (MACHINE_HAS_TOPOLOGY) {
rc = 1;
...@@ -314,7 +322,11 @@ static int __arch_update_cpu_topology(void)
update_cpu_masks();
if (!MACHINE_HAS_TOPOLOGY)
topology_update_polarization_simple();
if (cpu_management == 1)
hd_status = hd_enable_hiperdispatch();
mutex_unlock(&smp_cpu_state_mutex);
if (hd_status == 0)
hd_disable_hiperdispatch();
return rc;
}
...@@ -374,7 +386,24 @@ void topology_expect_change(void)
set_topology_timer();
}
-static int cpu_management;
+static int set_polarization(int polarization)
{
int rc = 0;
cpus_read_lock();
mutex_lock(&smp_cpu_state_mutex);
if (cpu_management == polarization)
goto out;
rc = topology_set_cpu_management(polarization);
if (rc)
goto out;
cpu_management = polarization;
topology_expect_change();
out:
mutex_unlock(&smp_cpu_state_mutex);
cpus_read_unlock();
return rc;
}
static ssize_t dispatching_show(struct device *dev,
struct device_attribute *attr,
...@@ -400,19 +429,7 @@ static ssize_t dispatching_store(struct device *dev,
return -EINVAL;
if (val != 0 && val != 1)
return -EINVAL;
-rc = 0;
-cpus_read_lock();
-mutex_lock(&smp_cpu_state_mutex);
-if (cpu_management == val)
-goto out;
-rc = topology_set_cpu_management(val);
-if (rc)
-goto out;
-cpu_management = val;
-topology_expect_change();
-out:
-mutex_unlock(&smp_cpu_state_mutex);
-cpus_read_unlock();
+rc = set_polarization(val);
return rc ? rc : count;
}
static DEVICE_ATTR_RW(dispatching);
...@@ -624,12 +641,37 @@ static int topology_ctl_handler(const struct ctl_table *ctl, int write,
return rc;
}
static int polarization_ctl_handler(const struct ctl_table *ctl, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
int polarization;
int rc;
struct ctl_table ctl_entry = {
.procname = ctl->procname,
.data = &polarization,
.maxlen = sizeof(int),
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
};
polarization = cpu_management;
rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos);
if (rc < 0 || !write)
return rc;
return set_polarization(polarization);
}
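For context, a minimal userspace sketch (not part of this commit) of exercising the new sysctl registered in the table below; the /proc/sys/s390/polarization path is an assumption that follows from register_sysctl("s390", ...) picking up the "polarization" entry, and writing it requires root:

/* sketch only: switch to vertical polarization and read the value back */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/proc/sys/s390/polarization";	/* assumed mount point of the "s390" sysctl dir */
	char buf[8];
	ssize_t n;
	int fd = open(path, O_RDWR);

	if (fd < 0) {
		perror(path);
		return 1;
	}
	if (write(fd, "1", 1) != 1)	/* 1 = vertical, 0 = horizontal, mirroring the dispatching attribute */
		perror("write");
	if (lseek(fd, 0, SEEK_SET) == 0 && (n = read(fd, buf, sizeof(buf) - 1)) > 0) {
		buf[n] = '\0';
		printf("polarization: %s", buf);
	}
	close(fd);
	return 0;
}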
static struct ctl_table topology_ctl_table[] = {
{
.procname = "topology",
.mode = 0644,
.proc_handler = topology_ctl_handler,
},
{
.procname = "polarization",
.mode = 0644,
.proc_handler = polarization_ctl_handler,
},
};
static int __init topology_init(void)
...@@ -642,6 +684,8 @@ static int __init topology_init(void)
set_topology_timer();
else
topology_update_polarization_simple();
if (IS_ENABLED(CONFIG_SCHED_TOPOLOGY_VERTICAL))
set_polarization(1);
register_sysctl("s390", topology_ctl_table);
dev_root = bus_get_dev_root(&cpu_subsys);
......
// SPDX-License-Identifier: GPL-2.0
/*
* Support for warning track interruption
*
* Copyright IBM Corp. 2023
*/
#include <linux/cpu.h>
#include <linux/debugfs.h>
#include <linux/kallsyms.h>
#include <linux/smpboot.h>
#include <linux/irq.h>
#include <uapi/linux/sched/types.h>
#include <asm/debug.h>
#include <asm/diag.h>
#include <asm/sclp.h>
#define WTI_DBF_LEN 64
struct wti_debug {
unsigned long missed;
unsigned long addr;
pid_t pid;
};
struct wti_state {
/* debug data for s390dbf */
struct wti_debug dbg;
/*
* Represents the real-time thread responsible to
* acknowledge the warning-track interrupt and trigger
* preliminary and postliminary precautions.
*/
struct task_struct *thread;
/*
* If pending is true, the real-time thread must be scheduled.
* If not, a wake up of that thread will remain a noop.
*/
bool pending;
};
static DEFINE_PER_CPU(struct wti_state, wti_state);
static debug_info_t *wti_dbg;
/*
* During a warning-track grace period, interrupts are disabled
* to prevent delays of the warning-track acknowledgment.
*
* Once the CPU is physically dispatched again, interrupts are
* re-enabled.
*/
static void wti_irq_disable(void)
{
unsigned long flags;
struct ctlreg cr6;
local_irq_save(flags);
local_ctl_store(6, &cr6);
/* disable all I/O interrupts */
cr6.val &= ~0xff000000UL;
local_ctl_load(6, &cr6);
local_irq_restore(flags);
}
static void wti_irq_enable(void)
{
unsigned long flags;
struct ctlreg cr6;
local_irq_save(flags);
local_ctl_store(6, &cr6);
/* enable all I/O interrupts */
cr6.val |= 0xff000000UL;
local_ctl_load(6, &cr6);
local_irq_restore(flags);
}
static void store_debug_data(struct wti_state *st)
{
struct pt_regs *regs = get_irq_regs();
st->dbg.pid = current->pid;
st->dbg.addr = 0;
if (!user_mode(regs))
st->dbg.addr = regs->psw.addr;
}
static void wti_interrupt(struct ext_code ext_code,
unsigned int param32, unsigned long param64)
{
struct wti_state *st = this_cpu_ptr(&wti_state);
inc_irq_stat(IRQEXT_WTI);
wti_irq_disable();
store_debug_data(st);
st->pending = true;
wake_up_process(st->thread);
}
static int wti_pending(unsigned int cpu)
{
struct wti_state *st = per_cpu_ptr(&wti_state, cpu);
return st->pending;
}
static void wti_dbf_grace_period(struct wti_state *st)
{
struct wti_debug *wdi = &st->dbg;
char buf[WTI_DBF_LEN];
if (wdi->addr)
snprintf(buf, sizeof(buf), "%d %pS", wdi->pid, (void *)wdi->addr);
else
snprintf(buf, sizeof(buf), "%d <user>", wdi->pid);
debug_text_event(wti_dbg, 2, buf);
wdi->missed++;
}
static int wti_show(struct seq_file *seq, void *v)
{
struct wti_state *st;
int cpu;
cpus_read_lock();
seq_puts(seq, " ");
for_each_online_cpu(cpu)
seq_printf(seq, "CPU%-8d", cpu);
seq_putc(seq, '\n');
for_each_online_cpu(cpu) {
st = per_cpu_ptr(&wti_state, cpu);
seq_printf(seq, " %10lu", st->dbg.missed);
}
seq_putc(seq, '\n');
cpus_read_unlock();
return 0;
}
DEFINE_SHOW_ATTRIBUTE(wti);
static void wti_thread_fn(unsigned int cpu)
{
struct wti_state *st = per_cpu_ptr(&wti_state, cpu);
st->pending = false;
/*
* Yield CPU voluntarily to the hypervisor. Control
* resumes when hypervisor decides to dispatch CPU
* to this LPAR again.
*/
if (diag49c(DIAG49C_SUBC_ACK))
wti_dbf_grace_period(st);
wti_irq_enable();
}
static struct smp_hotplug_thread wti_threads = {
.store = &wti_state.thread,
.thread_should_run = wti_pending,
.thread_fn = wti_thread_fn,
.thread_comm = "cpuwti/%u",
.selfparking = false,
};
static int __init wti_init(void)
{
struct sched_param wti_sched_param = { .sched_priority = MAX_RT_PRIO - 1 };
struct dentry *wti_dir;
struct wti_state *st;
int cpu, rc;
rc = -EOPNOTSUPP;
if (!sclp.has_wti)
goto out;
rc = smpboot_register_percpu_thread(&wti_threads);
if (WARN_ON(rc))
goto out;
for_each_online_cpu(cpu) {
st = per_cpu_ptr(&wti_state, cpu);
sched_setscheduler(st->thread, SCHED_FIFO, &wti_sched_param);
}
rc = register_external_irq(EXT_IRQ_WARNING_TRACK, wti_interrupt);
if (rc) {
pr_warn("Couldn't request external interrupt 0x1007\n");
goto out_thread;
}
irq_subclass_register(IRQ_SUBCLASS_WARNING_TRACK);
rc = diag49c(DIAG49C_SUBC_REG);
if (rc) {
pr_warn("Failed to register warning track interrupt through DIAG 49C\n");
rc = -EOPNOTSUPP;
goto out_subclass;
}
wti_dir = debugfs_create_dir("wti", arch_debugfs_dir);
debugfs_create_file("stat", 0400, wti_dir, NULL, &wti_fops);
wti_dbg = debug_register("wti", 1, 1, WTI_DBF_LEN);
if (!wti_dbg) {
rc = -ENOMEM;
goto out_debug_register;
}
rc = debug_register_view(wti_dbg, &debug_hex_ascii_view);
if (rc)
goto out_debug_register;
goto out;
out_debug_register:
debug_unregister(wti_dbg);
out_subclass:
irq_subclass_unregister(IRQ_SUBCLASS_WARNING_TRACK);
unregister_external_irq(EXT_IRQ_WARNING_TRACK, wti_interrupt);
out_thread:
smpboot_unregister_percpu_thread(&wti_threads);
out:
return rc;
}
late_initcall(wti_init);
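As a usage note, the statistics file created above can simply be read from userspace; a hypothetical sketch (the /sys/kernel/debug/s390 location of arch_debugfs_dir is an assumption, and reading debugfs usually requires root):

/* sketch only: print the per-CPU count of missed warning-track grace periods */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/s390/wti/stat", "r");	/* assumed debugfs path */
	char line[256];

	if (!f) {
		perror("wti stat");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}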
...@@ -95,11 +95,12 @@ static long cmm_alloc_pages(long nr, long *counter,
(*counter)++;
spin_unlock(&cmm_lock);
nr--;
cond_resched();
}
return nr;
}
-static long cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
+static long __cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
{
struct cmm_page_array *pa;
unsigned long addr;
...@@ -123,6 +124,21 @@ static long cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
return nr;
}
static long cmm_free_pages(long nr, long *counter, struct cmm_page_array **list)
{
long inc = 0;
while (nr) {
inc = min(256L, nr);
nr -= inc;
inc = __cmm_free_pages(inc, counter, list);
if (inc)
break;
cond_resched();
}
return nr + inc;
}
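The new wrapper frees at most 256 pages per call to __cmm_free_pages() and yields with cond_resched() between batches, so adding or removing a large number of pages no longer stalls rcu_sched. A userspace-flavored sketch of the same batching idea (illustrative only, all names are made up):

#include <sched.h>
#include <stddef.h>

#define BATCH 256UL

/* placeholder for the per-item work (returning one page in the kernel case) */
static void release_one(size_t idx) { (void)idx; }

static void release_all(size_t total)
{
	size_t done = 0;

	while (done < total) {
		size_t stop = (total - done > BATCH) ? done + BATCH : total;

		while (done < stop)
			release_one(done++);
		sched_yield();	/* userspace stand-in for cond_resched() */
	}
}

int main(void)
{
	release_all(100000);	/* hand back a large amount of work in bounded chunks */
	return 0;
}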
static int cmm_oom_notify(struct notifier_block *self,
unsigned long dummy, void *parm)
{
......
...@@ -527,9 +527,9 @@ b938 sortl RRE_RR
b939 dfltcc RRF_R0RR2
b93a kdsa RRE_RR
b93b nnpa RRE_00
-b93c ppno RRE_RR
-b93e kimd RRE_RR
-b93f klmd RRE_RR
+b93c prno RRE_RR
+b93e kimd RRF_U0RR
+b93f klmd RRF_U0RR
b941 cfdtr RRF_UURF
b942 clgdtr RRF_UURF
b943 clfdtr RRF_UURF
...@@ -549,6 +549,10 @@ b964 nngrk RRF_R0RR2
b965 ocgrk RRF_R0RR2
b966 nogrk RRF_R0RR2
b967 nxgrk RRF_R0RR2
b968 clzg RRE_RR
b969 ctzg RRE_RR
b96c bextg RRF_R0RR2
b96d bdepg RRF_R0RR2
b972 crt RRF_U0RR
b973 clrt RRF_U0RR
b974 nnrk RRF_R0RR2
...@@ -796,6 +800,16 @@ e35b sy RXY_RRRD
e35c mfy RXY_RRRD
e35e aly RXY_RRRD
e35f sly RXY_RRRD
e360 lxab RXY_RRRD
e361 llxab RXY_RRRD
e362 lxah RXY_RRRD
e363 llxah RXY_RRRD
e364 lxaf RXY_RRRD
e365 llxaf RXY_RRRD
e366 lxag RXY_RRRD
e367 llxag RXY_RRRD
e368 lxaq RXY_RRRD
e369 llxaq RXY_RRRD
e370 sthy RXY_RRRD
e371 lay RXY_RRRD
e372 stcy RXY_RRRD
...@@ -880,6 +894,8 @@ e63c vupkz VSI_URDV
e63d vstrl VSI_URDV
e63f vstrlr VRS_RRDV
e649 vlip VRI_V0UU2
e64a vcvdq VRI_VV0UU
e64e vcvbq VRR_VV0U2
e650 vcvb VRR_RV0UU
e651 vclzdp VRR_VV0U2
e652 vcvbg VRR_RV0UU
...@@ -893,7 +909,7 @@ e65b vpsop VRI_VVUUU2
e65c vupkzl VRR_VV0U2
e65d vcfn VRR_VV0UU2
e65e vclfnl VRR_VV0UU2
-e65f vtp VRR_0V
+e65f vtp VRR_0V0U
e670 vpkzr VRI_VVV0UU2
e671 vap VRI_VVV0UU2
e672 vsrpr VRI_VVV0UU2
...@@ -908,6 +924,7 @@ e67b vrp VRI_VVV0UU2
e67c vscshp VRR_VVV
e67d vcsph VRR_VVV0U0
e67e vsdp VRI_VVV0UU2
e67f vtz VRR_0VVU
e700 vleb VRX_VRRDU
e701 vleh VRX_VRRDU
e702 vleg VRX_VRRDU
...@@ -948,6 +965,7 @@ e74d vrep VRI_VVUU
e750 vpopct VRR_VV0U
e752 vctz VRR_VV0U
e753 vclz VRR_VV0U
e754 vgem VRR_VV0U
e756 vlr VRX_VV
e75c vistr VRR_VV0U0U
e75f vseg VRR_VV0U
...@@ -985,6 +1003,8 @@ e784 vpdi VRR_VVV0U
e785 vbperm VRR_VVV
e786 vsld VRI_VVV0U
e787 vsrd VRI_VVV0U
e788 veval VRI_VVV0UV
e789 vblend VRR_VVVU0V
e78a vstrc VRR_VVVUU0V
e78b vstrs VRR_VVVUU0V
e78c vperm VRR_VVV0V
...@@ -1010,6 +1030,10 @@ e7ac vmale VRR_VVVU0V
e7ad vmalo VRR_VVVU0V
e7ae vmae VRR_VVVU0V
e7af vmao VRR_VVVU0V
e7b0 vdl VRR_VVV0UU
e7b1 vrl VRR_VVV0UU
e7b2 vd VRR_VVV0UU
e7b3 vr VRR_VVV0UU
e7b4 vgfm VRR_VVV0U
e7b8 vmsl VRR_VVVUU0V
e7b9 vaccc VRR_VVVU0V
...@@ -1017,12 +1041,12 @@ e7bb vac VRR_VVVU0V
e7bc vgfma VRR_VVVU0V
e7bd vsbcbi VRR_VVVU0V
e7bf vsbi VRR_VVVU0V
-e7c0 vclgd VRR_VV0UUU
-e7c1 vcdlg VRR_VV0UUU
-e7c2 vcgd VRR_VV0UUU
-e7c3 vcdg VRR_VV0UUU
-e7c4 vlde VRR_VV0UU2
-e7c5 vled VRR_VV0UUU
+e7c0 vclfp VRR_VV0UUU
+e7c1 vcfpl VRR_VV0UUU
+e7c2 vcsfp VRR_VV0UUU
+e7c3 vcfps VRR_VV0UUU
+e7c4 vfll VRR_VV0UU2
+e7c5 vflr VRR_VV0UUU
e7c7 vfi VRR_VV0UUU
e7ca wfk VRR_VV0UU2
e7cb wfc VRR_VV0UU2
...@@ -1094,9 +1118,9 @@ eb54 niy SIY_URD
eb55 cliy SIY_URD
eb56 oiy SIY_URD
eb57 xiy SIY_URD
-eb60 lric RSY_RDRU
-eb61 stric RSY_RDRU
-eb62 mric RSY_RDRU
+eb60 lric RSY_RURD2
+eb61 stric RSY_RURD2
+eb62 mric RSY_RURD2
eb6a asi SIY_IRD
eb6e alsi SIY_IRD
eb71 lpswey SIY_RD
...@@ -1104,7 +1128,7 @@ eb7a agsi SIY_IRD
eb7e algsi SIY_IRD
eb80 icmh RSY_RURD
eb81 icmy RSY_RURD
-eb8a sqbs RSY_RDRU
+eb8a sqbs RSY_RURD2
eb8e mvclu RSY_RRRD
eb8f clclu RSY_RRRD
eb90 stmy RSY_RRRD
......
...@@ -21,7 +21,7 @@ config CRYPTO_DEV_PADLOCK
(so called VIA PadLock ACE, Advanced Cryptography Engine)
that provides instructions for very fast cryptographic
operations with supported algorithms.
The instructions are used only when the CPU supports them.
Otherwise software encryption is used.
...@@ -78,18 +78,79 @@ config ZCRYPT
config PKEY
tristate "Kernel API for protected key handling"
depends on S390
-depends on ZCRYPT
help
-With this option enabled the pkey kernel module provides an API
+With this option enabled the pkey kernel modules provide an API
for creation and handling of protected keys. Other parts of the
kernel or userspace applications may use these functions.
The protected key support is distributed into:
- A pkey base and API kernel module (pkey.ko) which offers the
infrastructure for the pkey handler kernel modules, the ioctl
and the sysfs API and the in-kernel API to the crypto cipher
implementations using protected key.
- A pkey pckmo kernel module (pkey-pckmo.ko) which is automatically
loaded when pckmo support (that is generation of protected keys
from clear key values) is available.
- A pkey CCA kernel module (pkey-cca.ko) which is automatically
loaded when a CEX crypto card is available.
- A pkey EP11 kernel module (pkey-ep11.ko) which is automatically
loaded when a CEX crypto card is available.
Select this option if you want to enable the kernel and userspace
-API for proteced key handling.
+API for protected key handling.
config PKEY_CCA
tristate "PKEY CCA support handler"
depends on PKEY
depends on ZCRYPT
help
This is the CCA support handler for deriving protected keys
from CCA (secure) keys. Also this handler provides an alternate
way to make protected keys from clear key values.
The PKEY CCA support handler needs a Crypto Express card (CEX)
in CCA mode.
If you have selected the PKEY option then you should also enable
this option unless you are sure you never need to derive protected
keys from CCA key material.
config PKEY_EP11
tristate "PKEY EP11 support handler"
depends on PKEY
depends on ZCRYPT
help
This is the EP11 support handler for deriving protected keys
from EP11 (secure) keys. Also this handler provides an alternate
way to make protected keys from clear key values.
The PKEY EP11 support handler needs a Crypto Express card (CEX)
in EP11 mode.
If you have selected the PKEY option then you should also enable
this option unless you are sure you never need to derive protected
keys from EP11 key material.
config PKEY_PCKMO
tristate "PKEY PCKMO support handler"
depends on PKEY
help
This is the PCKMO support handler for deriving protected keys
from clear key values via invoking the PCKMO instruction.
The PCKMO instruction can be enabled and disabled in the crypto
settings at the LPAR profile. This handler checks for availability
during initialization and if built as a kernel module unloads
itself if PCKMO is disabled.
The PCKMO way of deriving protected keys from clear key material
is especially used during self test of protected key ciphers like
PAES but the CCA and EP11 handler provide alternate ways to
generate protected keys from clear key values.
-Please note that creation of protected keys from secure keys
-requires to have at least one CEX card in coprocessor mode
-available at runtime.
+If you have selected the PKEY option then you should also enable
+this option unless you are sure you never need to derive protected
+keys from clear key values directly via PCKMO.
config CRYPTO_PAES_S390
tristate "PAES cipher algorithms"
......
...@@ -44,6 +44,7 @@ static void __init sclp_early_facilities_detect(void)
sclp.has_ibs = !!(sccb->fac117 & 0x20);
sclp.has_gisaf = !!(sccb->fac118 & 0x08);
sclp.has_hvs = !!(sccb->fac119 & 0x80);
sclp.has_wti = !!(sccb->fac119 & 0x40);
sclp.has_kss = !!(sccb->fac98 & 0x01);
sclp.has_aisii = !!(sccb->fac118 & 0x40);
sclp.has_aeni = !!(sccb->fac118 & 0x20);
......
...@@ -13,10 +13,22 @@ obj-$(CONFIG_ZCRYPT) += zcrypt.o
# adapter drivers depend on ap.o and zcrypt.o
obj-$(CONFIG_ZCRYPT) += zcrypt_cex4.o
-# pkey kernel module
-pkey-objs := pkey_api.o
+# pkey base and api module
+pkey-objs := pkey_base.o pkey_api.o pkey_sysfs.o
obj-$(CONFIG_PKEY) += pkey.o
# pkey cca handler module
pkey-cca-objs := pkey_cca.o
obj-$(CONFIG_PKEY_CCA) += pkey-cca.o
# pkey ep11 handler module
pkey-ep11-objs := pkey_ep11.o
obj-$(CONFIG_PKEY_EP11) += pkey-ep11.o
# pkey pckmo handler module
pkey-pckmo-objs := pkey_pckmo.o
obj-$(CONFIG_PKEY_PCKMO) += pkey-pckmo.o
# adjunct processor matrix
vfio_ap-objs := vfio_ap_drv.o vfio_ap_ops.o
obj-$(CONFIG_VFIO_AP) += vfio_ap.o