Commit 8f5759ae authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux into next

Pull first set of s390 updates from Martin Schwidefsky:
 "The biggest change in this patchset is conversion from the bootmem
  bitmaps to the memblock code.  This conversion requires two common
  code patches to introduce the 'physmem' memblock list.

  We experimented with ticket spinlocks but in the end decided against
  them as they perform poorly on virtualized systems.  But the spinlock
  cleanup and some small improvements are included.

  The uaccess code got another optimization, the get_user/put_user calls
  are now inline again for kernel compiles targeted at z10 or newer
  machines.  This makes the text segment shorter and the code gets a
  little bit faster.

  And as always some bug fixes"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (31 commits)
  s390/lowcore: replace lowcore irb array with a per-cpu variable
  s390/lowcore: reserve 96 bytes for IRB in lowcore
  s390/facilities: remove extract-cpu-time facility check
  s390: require mvcos facility for z10 and newer machines
  s390/boot: fix boot of compressed kernel built with gcc 4.9
  s390/cio: remove weird assignment during argument evaluation
  s390/time: cast tv_nsec to u64 prior to shift in update_vsyscall
  s390/oprofile: make return of 0 explicit
  s390/spinlock: refactor arch_spin_lock_wait[_flags]
  s390/rwlock: add missing local_irq_restore calls
  s390/spinlock,rwlock: always to a load-and-test first
  s390/cio: fix multiple structure definitions
  s390/spinlock: fix system hang with spin_retry <= 0
  s390/appldata: add slab.h for kzalloc/kfree
  s390/uaccess: provide inline variants of get_user/put_user
  s390/pci: add some new arch specific pci attributes
  s390/pci: use pdev->dev.groups for attribute creation
  s390/pci: use macro for attribute creation
  s390/pci: improve state check when processing hotplug events
  s390: split TIF bits into CIF, PIF and TIF bits
  ...
parents e5c4ecdc 63aef00b
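The pull message above highlights the bootmem-to-memblock conversion and the new 'physmem' memblock list it needs. As a hedged illustration only (this is not code from the series; memblock_add_range() and memblock.physmem are the common-code pieces the message refers to, and the exact helper name here is made up), registering a detected memory chunk in both lists looks roughly like this:

#include <linux/memblock.h>

/* Hedged sketch: with HAVE_MEMBLOCK_PHYS_MAP selected, each detected
 * chunk is registered twice - once in the usual "memory" list that the
 * allocator works on, and once in the new "physmem" list that preserves
 * the unmodified physical layout (e.g. for dump tools). */
static void __init add_physmem_chunk(phys_addr_t start, phys_addr_t size)
{
        memblock_add_range(&memblock.memory, start, size, 0, 0);
        memblock_add_range(&memblock.physmem, start, size, 0, 0);
}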
-s390 SCSI dump tool (zfcpdump)
+The s390 SCSI dump tool (zfcpdump)
 System z machines (z900 or higher) provide hardware support for creating system
 dumps on SCSI disks. The dump process is initiated by booting a dump tool, which
 has to create a dump of the current (probably crashed) Linux image. In order to
 not overwrite memory of the crashed Linux with data of the dump tool, the
-hardware saves some memory plus the register sets of the boot cpu before the
+hardware saves some memory plus the register sets of the boot CPU before the
 dump tool is loaded. There exists an SCLP hardware interface to obtain the saved
 memory afterwards. Currently 32 MB are saved.
 This zfcpdump implementation consists of a Linux dump kernel together with
-a userspace dump tool, which are loaded together into the saved memory region
+a user space dump tool, which are loaded together into the saved memory region
 below 32 MB. zfcpdump is installed on a SCSI disk using zipl (as contained in
 the s390-tools package) to make the device bootable. The operator of a Linux
 system can then trigger a SCSI dump by booting the SCSI disk, where zfcpdump
@@ -19,68 +19,33 @@ The kernel part of zfcpdump is implemented as a debugfs file under "zcore/mem",
 which exports memory and registers of the crashed Linux in an s390
 standalone dump format. It can be used in the same way as e.g. /dev/mem. The
 dump format defines a 4K header followed by plain uncompressed memory. The
-register sets are stored in the prefix pages of the respective cpus. To build a
+register sets are stored in the prefix pages of the respective CPUs. To build a
 dump enabled kernel with the zcore driver, the kernel config option
-CONFIG_ZFCPDUMP has to be set. When reading from "zcore/mem", the part of
+CONFIG_CRASH_DUMP has to be set. When reading from "zcore/mem", the part of
 memory, which has been saved by hardware is read by the driver via the SCLP
 hardware interface. The second part is just copied from the non overwritten real
 memory.
-The userspace application of zfcpdump can reside e.g. in an intitramfs or an
-initrd. It reads from zcore/mem and writes the system dump to a file on a
-SCSI disk.
+Since kernel version 3.12 also the /proc/vmcore file can also be used to access
+the dump.
-To build a zfcpdump kernel use the following settings in your kernel
-configuration:
-* CONFIG_ZFCPDUMP=y
-* Enable ZFCP driver
-* Enable SCSI driver
-* Enable ext2 and ext3 filesystems
-* Disable as many features as possible to keep the kernel small.
-  E.g. network support is not needed at all.
+To get a valid zfcpdump kernel configuration use "make zfcpdump_defconfig".
-To use the zfcpdump userspace application in an initramfs you have to do the
-following:
-* Copy the zfcpdump executable somewhere into your Linux tree.
-  E.g. to "arch/s390/boot/zfcpdump. If you do not want to include
-  shared libraries, compile the tool with the "-static" gcc option.
-* If you want to include e2fsck, add it to your source tree, too. The zfcpdump
-  application attempts to start /sbin/e2fsck from the ramdisk.
-* Use an initramfs config file like the following:
-  dir /dev 755 0 0
-  nod /dev/console 644 0 0 c 5 1
-  nod /dev/null 644 0 0 c 1 3
-  nod /dev/sda1 644 0 0 b 8 1
-  nod /dev/sda2 644 0 0 b 8 2
-  nod /dev/sda3 644 0 0 b 8 3
-  nod /dev/sda4 644 0 0 b 8 4
-  nod /dev/sda5 644 0 0 b 8 5
-  nod /dev/sda6 644 0 0 b 8 6
-  nod /dev/sda7 644 0 0 b 8 7
-  nod /dev/sda8 644 0 0 b 8 8
-  nod /dev/sda9 644 0 0 b 8 9
-  nod /dev/sda10 644 0 0 b 8 10
-  nod /dev/sda11 644 0 0 b 8 11
-  nod /dev/sda12 644 0 0 b 8 12
-  nod /dev/sda13 644 0 0 b 8 13
-  nod /dev/sda14 644 0 0 b 8 14
-  nod /dev/sda15 644 0 0 b 8 15
-  file /init arch/s390/boot/zfcpdump 755 0 0
-  file /sbin/e2fsck arch/s390/boot/e2fsck 755 0 0
-  dir /proc 755 0 0
-  dir /sys 755 0 0
-  dir /mnt 755 0 0
-  dir /sbin 755 0 0
-* Issue "make image" to build the zfcpdump image with initramfs.
+The s390 zipl tool looks for the zfcpdump kernel and optional initrd/initramfs
+under the following locations:
+* kernel:  <zfcpdump directory>/zfcpdump.image
+* ramdisk: <zfcpdump directory>/zfcpdump.rd
+The zfcpdump directory is defined in the s390-tools package.
+The user space application of zfcpdump can reside in an intitramfs or an
+initrd. It can also be included in a built-in kernel initramfs. The application
+reads from /proc/vmcore or zcore/mem and writes the system dump to a SCSI disk.
-In a Linux distribution the zfcpdump enabled kernel image must be copied to
-/usr/share/zfcpdump/zfcpdump.image, where the s390 zipl tool is looking for the
-dump kernel when preparing a SCSI dump disk.
-If you use a ramdisk copy it to "/usr/share/zfcpdump/zfcpdump.rd".
+The s390-tools package version 1.24.0 and above builds an external zfcpdump
+initramfs with a user space application that writes the dump to a SCSI
+partition.
 For more information on how to use zfcpdump refer to the s390 'Using the Dump
 Tools book', which is available from
......
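As an aside (an illustration, not part of the upstream documentation): because the format described above is just a 4K header followed by raw memory, the header exported through "zcore/mem" can be inspected with a few lines of C. The path below assumes debugfs is mounted at /sys/kernel/debug.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        unsigned char header[4096];
        FILE *f = fopen("/sys/kernel/debug/zcore/mem", "rb");

        /* The first 4K is the s390 standalone dump header; everything
         * after offset 4096 is plain uncompressed memory. */
        if (!f || fread(header, 1, sizeof(header), f) != sizeof(header)) {
                perror("zcore/mem");
                return EXIT_FAILURE;
        }
        printf("dump header starts with: %02x %02x %02x %02x\n",
               header[0], header[1], header[2], header[3]);
        fclose(f);
        return EXIT_SUCCESS;
}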
@@ -60,7 +60,6 @@ config PCI_QUIRKS
 config S390
 	def_bool y
-	select ARCH_DISCARD_MEMBLOCK
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
 	select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
@@ -130,6 +129,7 @@ config S390
 	select HAVE_KVM if 64BIT
 	select HAVE_MEMBLOCK
 	select HAVE_MEMBLOCK_NODE_MAP
+	select HAVE_MEMBLOCK_PHYS_MAP
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_OPROFILE
 	select HAVE_PERF_EVENTS
@@ -139,6 +139,7 @@ config S390
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select KTIME_SCALAR if 32BIT
 	select MODULES_USE_ELF_RELA
+	select NO_BOOTMEM
 	select OLD_SIGACTION
 	select OLD_SIGSUSPEND3
 	select SYSCTL_EXCEPTION_TRACE
@@ -592,21 +593,14 @@ config CRASH_DUMP
 	bool "kernel crash dumps"
 	depends on 64BIT && SMP
 	select KEXEC
-	select ZFCPDUMP
 	help
 	  Generate crash dump after being started by kexec.
 	  Crash dump kernels are loaded in the main kernel with kexec-tools
 	  into a specially reserved region and then later executed after
 	  a crash by kdump/kexec.
-	  For more details see Documentation/kdump/kdump.txt
-config ZFCPDUMP
-	def_bool n
-	prompt "zfcpdump support"
-	depends on 64BIT && SMP
-	help
-	  Select this option if you want to build an zfcpdump enabled kernel.
 	  Refer to <file:Documentation/s390/zfcpdump.txt> for more details on this.
+	  This option also enables s390 zfcpdump.
+	  See also <file:Documentation/s390/zfcpdump.txt>
 endmenu
......
@@ -13,6 +13,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/pagemap.h>
 #include <linux/swap.h>
+#include <linux/slab.h>
 #include <asm/io.h>
 #include "appldata.h"
......
@@ -12,7 +12,7 @@ targets += misc.o piggy.o sizes.h head$(BITS).o
 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
-KBUILD_CFLAGS += $(cflags-y)
+KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks
 KBUILD_CFLAGS += $(call cc-option,-mpacked-stack)
 KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
......
@@ -229,5 +229,5 @@ int ccw_device_siosl(struct ccw_device *);
 extern void ccw_device_get_schid(struct ccw_device *, struct subchannel_id *);
-extern void *ccw_device_get_chp_desc(struct ccw_device *, int);
+struct channel_path_desc *ccw_device_get_chp_desc(struct ccw_device *, int);
 #endif /* _S390_CCWDEV_H_ */
@@ -10,6 +10,8 @@ struct ccw_driver;
  * @count: number of attached slave devices
  * @dev: embedded device structure
  * @cdev: variable number of slave devices, allocated as needed
+ * @ungroup_work: work to be done when a ccwgroup notifier has action
+ *	type %BUS_NOTIFY_UNBIND_DRIVER
  */
 struct ccwgroup_device {
 	enum {
......
@@ -8,6 +8,17 @@
 #include <uapi/asm/chpid.h>
 #include <asm/cio.h>
+struct channel_path_desc {
+	u8 flags;
+	u8 lsn;
+	u8 desc;
+	u8 chpid;
+	u8 swla;
+	u8 zeroes;
+	u8 chla;
+	u8 chpp;
+} __packed;
 static inline void chp_id_init(struct chp_id *chpid)
 {
 	memset(chpid, 0, sizeof(struct chp_id));
......
@@ -29,7 +29,7 @@ static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
 	int cmparg = (encoded_op << 20) >> 20;
 	int oldval = 0, newval, ret;
-	update_primary_asce(current);
+	load_kernel_asce();
 	if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
 		oparg = 1 << oparg;
@@ -79,7 +79,7 @@ static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
 {
 	int ret;
-	update_primary_asce(current);
+	load_kernel_asce();
 	asm volatile(
 		"	sacf	256\n"
 		"0:	cs	%1,%4,0(%5)\n"
......
...@@ -93,7 +93,9 @@ struct _lowcore { ...@@ -93,7 +93,9 @@ struct _lowcore {
__u32 save_area_sync[8]; /* 0x0200 */ __u32 save_area_sync[8]; /* 0x0200 */
__u32 save_area_async[8]; /* 0x0220 */ __u32 save_area_async[8]; /* 0x0220 */
__u32 save_area_restart[1]; /* 0x0240 */ __u32 save_area_restart[1]; /* 0x0240 */
__u8 pad_0x0244[0x0248-0x0244]; /* 0x0244 */
/* CPU flags. */
__u32 cpu_flags; /* 0x0244 */
/* Return psws. */ /* Return psws. */
psw_t return_psw; /* 0x0248 */ psw_t return_psw; /* 0x0248 */
...@@ -139,12 +141,9 @@ struct _lowcore { ...@@ -139,12 +141,9 @@ struct _lowcore {
__u32 percpu_offset; /* 0x02f0 */ __u32 percpu_offset; /* 0x02f0 */
__u32 machine_flags; /* 0x02f4 */ __u32 machine_flags; /* 0x02f4 */
__u32 ftrace_func; /* 0x02f8 */ __u32 ftrace_func; /* 0x02f8 */
__u8 pad_0x02fc[0x0300-0x02fc]; /* 0x02fc */ __u32 spinlock_lockval; /* 0x02fc */
/* Interrupt response block */
__u8 irb[64]; /* 0x0300 */
__u8 pad_0x0340[0x0e00-0x0340]; /* 0x0340 */ __u8 pad_0x0300[0x0e00-0x0300]; /* 0x0300 */
/* /*
* 0xe00 contains the address of the IPL Parameter Information * 0xe00 contains the address of the IPL Parameter Information
...@@ -237,7 +236,9 @@ struct _lowcore { ...@@ -237,7 +236,9 @@ struct _lowcore {
__u64 save_area_sync[8]; /* 0x0200 */ __u64 save_area_sync[8]; /* 0x0200 */
__u64 save_area_async[8]; /* 0x0240 */ __u64 save_area_async[8]; /* 0x0240 */
__u64 save_area_restart[1]; /* 0x0280 */ __u64 save_area_restart[1]; /* 0x0280 */
__u8 pad_0x0288[0x0290-0x0288]; /* 0x0288 */
/* CPU flags. */
__u64 cpu_flags; /* 0x0288 */
/* Return psws. */ /* Return psws. */
psw_t return_psw; /* 0x0290 */ psw_t return_psw; /* 0x0290 */
...@@ -285,15 +286,13 @@ struct _lowcore { ...@@ -285,15 +286,13 @@ struct _lowcore {
__u64 machine_flags; /* 0x0388 */ __u64 machine_flags; /* 0x0388 */
__u64 ftrace_func; /* 0x0390 */ __u64 ftrace_func; /* 0x0390 */
__u64 gmap; /* 0x0398 */ __u64 gmap; /* 0x0398 */
__u8 pad_0x03a0[0x0400-0x03a0]; /* 0x03a0 */ __u32 spinlock_lockval; /* 0x03a0 */
__u8 pad_0x03a0[0x0400-0x03a4]; /* 0x03a4 */
/* Interrupt response block. */
__u8 irb[64]; /* 0x0400 */
/* Per cpu primary space access list */ /* Per cpu primary space access list */
__u32 paste[16]; /* 0x0440 */ __u32 paste[16]; /* 0x0400 */
__u8 pad_0x0480[0x0e00-0x0480]; /* 0x0480 */ __u8 pad_0x04c0[0x0e00-0x0440]; /* 0x0440 */
/* /*
* 0xe00 contains the address of the IPL Parameter Information * 0xe00 contains the address of the IPL Parameter Information
......
...@@ -30,33 +30,31 @@ static inline int init_new_context(struct task_struct *tsk, ...@@ -30,33 +30,31 @@ static inline int init_new_context(struct task_struct *tsk,
#define destroy_context(mm) do { } while (0) #define destroy_context(mm) do { } while (0)
static inline void update_user_asce(struct mm_struct *mm, int load_primary) static inline void set_user_asce(struct mm_struct *mm)
{ {
pgd_t *pgd = mm->pgd; pgd_t *pgd = mm->pgd;
S390_lowcore.user_asce = mm->context.asce_bits | __pa(pgd); S390_lowcore.user_asce = mm->context.asce_bits | __pa(pgd);
if (load_primary)
__ctl_load(S390_lowcore.user_asce, 1, 1);
set_fs(current->thread.mm_segment); set_fs(current->thread.mm_segment);
set_cpu_flag(CIF_ASCE);
} }
static inline void clear_user_asce(struct mm_struct *mm, int load_primary) static inline void clear_user_asce(void)
{ {
S390_lowcore.user_asce = S390_lowcore.kernel_asce; S390_lowcore.user_asce = S390_lowcore.kernel_asce;
if (load_primary) __ctl_load(S390_lowcore.user_asce, 1, 1);
__ctl_load(S390_lowcore.user_asce, 1, 1);
__ctl_load(S390_lowcore.user_asce, 7, 7); __ctl_load(S390_lowcore.user_asce, 7, 7);
} }
static inline void update_primary_asce(struct task_struct *tsk) static inline void load_kernel_asce(void)
{ {
unsigned long asce; unsigned long asce;
__ctl_store(asce, 1, 1); __ctl_store(asce, 1, 1);
if (asce != S390_lowcore.kernel_asce) if (asce != S390_lowcore.kernel_asce)
__ctl_load(S390_lowcore.kernel_asce, 1, 1); __ctl_load(S390_lowcore.kernel_asce, 1, 1);
set_tsk_thread_flag(tsk, TIF_ASCE); set_cpu_flag(CIF_ASCE);
} }
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
...@@ -64,25 +62,17 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, ...@@ -64,25 +62,17 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
update_primary_asce(tsk);
if (prev == next) if (prev == next)
return; return;
if (MACHINE_HAS_TLB_LC) if (MACHINE_HAS_TLB_LC)
cpumask_set_cpu(cpu, &next->context.cpu_attach_mask); cpumask_set_cpu(cpu, &next->context.cpu_attach_mask);
if (atomic_inc_return(&next->context.attach_count) >> 16) { /* Clear old ASCE by loading the kernel ASCE. */
/* Delay update_user_asce until all TLB flushes are done. */ __ctl_load(S390_lowcore.kernel_asce, 1, 1);
set_tsk_thread_flag(tsk, TIF_TLB_WAIT); __ctl_load(S390_lowcore.kernel_asce, 7, 7);
/* Clear old ASCE by loading the kernel ASCE. */ /* Delay loading of the new ASCE to control registers CR1 & CR7 */
clear_user_asce(next, 0); set_cpu_flag(CIF_ASCE);
} else { atomic_inc(&next->context.attach_count);
cpumask_set_cpu(cpu, mm_cpumask(next));
update_user_asce(next, 0);
if (next->context.flush_mm)
/* Flush pending TLBs */
__tlb_flush_mm(next);
}
atomic_dec(&prev->context.attach_count); atomic_dec(&prev->context.attach_count);
WARN_ON(atomic_read(&prev->context.attach_count) < 0);
if (MACHINE_HAS_TLB_LC) if (MACHINE_HAS_TLB_LC)
cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask); cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
} }
...@@ -93,15 +83,14 @@ static inline void finish_arch_post_lock_switch(void) ...@@ -93,15 +83,14 @@ static inline void finish_arch_post_lock_switch(void)
struct task_struct *tsk = current; struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm; struct mm_struct *mm = tsk->mm;
if (!test_tsk_thread_flag(tsk, TIF_TLB_WAIT)) if (!mm)
return; return;
preempt_disable(); preempt_disable();
clear_tsk_thread_flag(tsk, TIF_TLB_WAIT);
while (atomic_read(&mm->context.attach_count) >> 16) while (atomic_read(&mm->context.attach_count) >> 16)
cpu_relax(); cpu_relax();
cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm)); cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
update_user_asce(mm, 0); set_user_asce(mm);
if (mm->context.flush_mm) if (mm->context.flush_mm)
__tlb_flush_mm(mm); __tlb_flush_mm(mm);
preempt_enable(); preempt_enable();
...@@ -113,7 +102,9 @@ static inline void finish_arch_post_lock_switch(void) ...@@ -113,7 +102,9 @@ static inline void finish_arch_post_lock_switch(void)
static inline void activate_mm(struct mm_struct *prev, static inline void activate_mm(struct mm_struct *prev,
struct mm_struct *next) struct mm_struct *next)
{ {
switch_mm(prev, next, current); switch_mm(prev, next, current);
cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
set_user_asce(next);
} }
static inline void arch_dup_mmap(struct mm_struct *oldmm, static inline void arch_dup_mmap(struct mm_struct *oldmm,
......
@@ -78,10 +78,16 @@ struct zpci_dev {
 	enum zpci_state state;
 	u32		fid;		/* function ID, used by sclp */
 	u32		fh;		/* function handle, used by insn's */
+	u16		vfn;		/* virtual function number */
 	u16		pchid;		/* physical channel ID */
 	u8		pfgid;		/* function group ID */
+	u8		pft;		/* pci function type */
 	u16		domain;
+	u8 pfip[CLP_PFIP_NR_SEGMENTS];	/* pci function internal path */
+	u32 uid;			/* user defined id */
+	u8 util_str[CLP_UTIL_STR_LEN];	/* utility string */
 	/* IRQ stuff */
 	u64		msi_addr;	/* MSI address */
 	struct airq_iv *aibv;		/* adapter interrupt bit vector */
......
@@ -44,6 +44,7 @@ struct clp_fh_list_entry {
 #define CLP_SET_DISABLE_PCI_FN	1	/* Yes, 1 disables it */
 #define CLP_UTIL_STR_LEN	64
+#define CLP_PFIP_NR_SEGMENTS	4
 /* List PCI functions request */
 struct clp_req_list_pci {
@@ -85,7 +86,7 @@ struct clp_rsp_query_pci {
 	struct clp_rsp_hdr hdr;
 	u32 fmt			:  4;	/* cmd request block format */
 	u32			: 28;
-	u64 reserved1;
+	u64			: 64;
 	u16 vfn;			/* virtual fn number */
 	u16			:  7;
 	u16 util_str_avail	:  1;	/* utility string available? */
@@ -94,10 +95,13 @@ struct clp_rsp_query_pci {
 	u8 bar_size[PCI_BAR_COUNT];
 	u16 pchid;
 	u32 bar[PCI_BAR_COUNT];
-	u64 reserved2;
+	u8 pfip[CLP_PFIP_NR_SEGMENTS];	/* pci function internal path */
+	u32			: 24;
+	u8 pft;				/* pci function type */
 	u64 sdma;			/* start dma as */
 	u64 edma;			/* end dma as */
-	u64 reserved3[6];
+	u32 reserved[11];
+	u32 uid;			/* user defined id */
 	u8 util_str[CLP_UTIL_STR_LEN];	/* utility string */
 } __packed;
......
@@ -11,6 +11,13 @@
 #ifndef __ASM_S390_PROCESSOR_H
 #define __ASM_S390_PROCESSOR_H
+#define CIF_MCCK_PENDING	0	/* machine check handling is pending */
+#define CIF_ASCE		1	/* user asce needs fixup / uaccess */
+
+#define _CIF_MCCK_PENDING	(1<<CIF_MCCK_PENDING)
+#define _CIF_ASCE		(1<<CIF_ASCE)
 #ifndef __ASSEMBLY__
 #include <linux/linkage.h>
@@ -21,6 +28,21 @@
 #include <asm/setup.h>
 #include <asm/runtime_instr.h>
+static inline void set_cpu_flag(int flag)
+{
+	S390_lowcore.cpu_flags |= (1U << flag);
+}
+
+static inline void clear_cpu_flag(int flag)
+{
+	S390_lowcore.cpu_flags &= ~(1U << flag);
+}
+
+static inline int test_cpu_flag(int flag)
+{
+	return !!(S390_lowcore.cpu_flags & (1U << flag));
+}
 /*
  * Default implementation of macro that returns current
  * instruction pointer ("program counter").
......
@@ -8,6 +8,12 @@
 #include <uapi/asm/ptrace.h>
+#define PIF_SYSCALL		0	/* inside a system call */
+#define PIF_PER_TRAP		1	/* deliver sigtrap on return to user */
+
+#define _PIF_SYSCALL		(1<<PIF_SYSCALL)
+#define _PIF_PER_TRAP		(1<<PIF_PER_TRAP)
 #ifndef __ASSEMBLY__
 #define PSW_KERNEL_BITS	(PSW_DEFAULT_KEY | PSW_MASK_BASE | PSW_ASC_HOME | \
@@ -29,6 +35,7 @@ struct pt_regs
 	unsigned int int_code;
 	unsigned int int_parm;
 	unsigned long int_parm_long;
+	unsigned long flags;
 };
 /*
@@ -79,6 +86,21 @@ struct per_struct_kernel {
 #define PER_CONTROL_SUSPENSION		0x00400000UL
 #define PER_CONTROL_ALTERATION		0x00200000UL
+static inline void set_pt_regs_flag(struct pt_regs *regs, int flag)
+{
+	regs->flags |= (1U << flag);
+}
+
+static inline void clear_pt_regs_flag(struct pt_regs *regs, int flag)
+{
+	regs->flags &= ~(1U << flag);
+}
+
+static inline int test_pt_regs_flag(struct pt_regs *regs, int flag)
+{
+	return !!(regs->flags & (1U << flag));
+}
 /*
  * These are defined as per linux/ptrace.h, which see.
  */
......
@@ -9,7 +9,6 @@
 #define PARMAREA		0x10400
-#define MEMORY_CHUNKS		256
 #ifndef __ASSEMBLY__
@@ -31,22 +30,11 @@
 #endif /* CONFIG_64BIT */
 #define COMMAND_LINE      ((char *) (0x10480))
-#define CHUNK_READ_WRITE 0
-#define CHUNK_READ_ONLY  1
-
-struct mem_chunk {
-	unsigned long addr;
-	unsigned long size;
-	int type;
-};
-
-extern struct mem_chunk memory_chunk[];
 extern int memory_end_set;
 extern unsigned long memory_end;
+extern unsigned long max_physmem_end;
-void detect_memory_layout(struct mem_chunk chunk[], unsigned long maxsize);
-void create_mem_hole(struct mem_chunk mem_chunk[], unsigned long addr,
-		     unsigned long size);
+extern void detect_memory_memblock(void);
 /*
  * Machine features detected in head.S
......
@@ -30,7 +30,6 @@ extern int smp_store_status(int cpu);
 extern int smp_vcpu_scheduled(int cpu);
 extern void smp_yield_cpu(int cpu);
 extern void smp_yield(void);
-extern void smp_stop_cpu(void);
 extern void smp_cpu_set_polarization(int cpu, int val);
 extern int smp_cpu_get_polarization(int cpu);
 extern void smp_fill_possible_mask(void);
@@ -54,6 +53,8 @@ static inline void smp_yield_cpu(int cpu) { }
 static inline void smp_yield(void) { }
 static inline void smp_fill_possible_mask(void) { }
+#endif /* CONFIG_SMP */
 static inline void smp_stop_cpu(void)
 {
 	u16 pcpu = stap();
@@ -64,8 +65,6 @@ static inline void smp_stop_cpu(void)
 	}
 }
-#endif /* CONFIG_SMP */
 #ifdef CONFIG_HOTPLUG_CPU
 extern int smp_rescan_cpus(void);
 extern void __noreturn cpu_die(void);
......
...@@ -11,18 +11,21 @@ ...@@ -11,18 +11,21 @@
#include <linux/smp.h> #include <linux/smp.h>
#define SPINLOCK_LOCKVAL (S390_lowcore.spinlock_lockval)
extern int spin_retry; extern int spin_retry;
static inline int static inline int
_raw_compare_and_swap(volatile unsigned int *lock, _raw_compare_and_swap(unsigned int *lock, unsigned int old, unsigned int new)
unsigned int old, unsigned int new)
{ {
unsigned int old_expected = old;
asm volatile( asm volatile(
" cs %0,%3,%1" " cs %0,%3,%1"
: "=d" (old), "=Q" (*lock) : "=d" (old), "=Q" (*lock)
: "0" (old), "d" (new), "Q" (*lock) : "0" (old), "d" (new), "Q" (*lock)
: "cc", "memory" ); : "cc", "memory" );
return old; return old == old_expected;
} }
/* /*
...@@ -34,57 +37,69 @@ _raw_compare_and_swap(volatile unsigned int *lock, ...@@ -34,57 +37,69 @@ _raw_compare_and_swap(volatile unsigned int *lock,
* (the type definitions are in asm/spinlock_types.h) * (the type definitions are in asm/spinlock_types.h)
*/ */
#define arch_spin_is_locked(x) ((x)->owner_cpu != 0) void arch_spin_lock_wait(arch_spinlock_t *);
#define arch_spin_unlock_wait(lock) \ int arch_spin_trylock_retry(arch_spinlock_t *);
do { while (arch_spin_is_locked(lock)) \ void arch_spin_relax(arch_spinlock_t *);
arch_spin_relax(lock); } while (0) void arch_spin_lock_wait_flags(arch_spinlock_t *, unsigned long flags);
extern void arch_spin_lock_wait(arch_spinlock_t *); static inline u32 arch_spin_lockval(int cpu)
extern void arch_spin_lock_wait_flags(arch_spinlock_t *, unsigned long flags); {
extern int arch_spin_trylock_retry(arch_spinlock_t *); return ~cpu;
extern void arch_spin_relax(arch_spinlock_t *lock); }
static inline int arch_spin_value_unlocked(arch_spinlock_t lock) static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{ {
return lock.owner_cpu == 0; return lock.lock == 0;
} }
static inline void arch_spin_lock(arch_spinlock_t *lp) static inline int arch_spin_is_locked(arch_spinlock_t *lp)
{ {
int old; return ACCESS_ONCE(lp->lock) != 0;
}
old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id()); static inline int arch_spin_trylock_once(arch_spinlock_t *lp)
if (likely(old == 0)) {
return; barrier();
arch_spin_lock_wait(lp); return likely(arch_spin_value_unlocked(*lp) &&
_raw_compare_and_swap(&lp->lock, 0, SPINLOCK_LOCKVAL));
} }
static inline void arch_spin_lock_flags(arch_spinlock_t *lp, static inline int arch_spin_tryrelease_once(arch_spinlock_t *lp)
unsigned long flags)
{ {
int old; return _raw_compare_and_swap(&lp->lock, SPINLOCK_LOCKVAL, 0);
}
old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id()); static inline void arch_spin_lock(arch_spinlock_t *lp)
if (likely(old == 0)) {
return; if (!arch_spin_trylock_once(lp))
arch_spin_lock_wait_flags(lp, flags); arch_spin_lock_wait(lp);
} }
static inline int arch_spin_trylock(arch_spinlock_t *lp) static inline void arch_spin_lock_flags(arch_spinlock_t *lp,
unsigned long flags)
{ {
int old; if (!arch_spin_trylock_once(lp))
arch_spin_lock_wait_flags(lp, flags);
}
old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id()); static inline int arch_spin_trylock(arch_spinlock_t *lp)
if (likely(old == 0)) {
return 1; if (!arch_spin_trylock_once(lp))
return arch_spin_trylock_retry(lp); return arch_spin_trylock_retry(lp);
return 1;
} }
static inline void arch_spin_unlock(arch_spinlock_t *lp) static inline void arch_spin_unlock(arch_spinlock_t *lp)
{ {
_raw_compare_and_swap(&lp->owner_cpu, lp->owner_cpu, 0); arch_spin_tryrelease_once(lp);
}
static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
while (arch_spin_is_locked(lock))
arch_spin_relax(lock);
} }
/* /*
* Read-write spinlocks, allowing multiple readers * Read-write spinlocks, allowing multiple readers
* but only one writer. * but only one writer.
...@@ -115,42 +130,50 @@ extern void _raw_write_lock_wait(arch_rwlock_t *lp); ...@@ -115,42 +130,50 @@ extern void _raw_write_lock_wait(arch_rwlock_t *lp);
extern void _raw_write_lock_wait_flags(arch_rwlock_t *lp, unsigned long flags); extern void _raw_write_lock_wait_flags(arch_rwlock_t *lp, unsigned long flags);
extern int _raw_write_trylock_retry(arch_rwlock_t *lp); extern int _raw_write_trylock_retry(arch_rwlock_t *lp);
static inline int arch_read_trylock_once(arch_rwlock_t *rw)
{
unsigned int old = ACCESS_ONCE(rw->lock);
return likely((int) old >= 0 &&
_raw_compare_and_swap(&rw->lock, old, old + 1));
}
static inline int arch_write_trylock_once(arch_rwlock_t *rw)
{
unsigned int old = ACCESS_ONCE(rw->lock);
return likely(old == 0 &&
_raw_compare_and_swap(&rw->lock, 0, 0x80000000));
}
static inline void arch_read_lock(arch_rwlock_t *rw) static inline void arch_read_lock(arch_rwlock_t *rw)
{ {
unsigned int old; if (!arch_read_trylock_once(rw))
old = rw->lock & 0x7fffffffU;
if (_raw_compare_and_swap(&rw->lock, old, old + 1) != old)
_raw_read_lock_wait(rw); _raw_read_lock_wait(rw);
} }
static inline void arch_read_lock_flags(arch_rwlock_t *rw, unsigned long flags) static inline void arch_read_lock_flags(arch_rwlock_t *rw, unsigned long flags)
{ {
unsigned int old; if (!arch_read_trylock_once(rw))
old = rw->lock & 0x7fffffffU;
if (_raw_compare_and_swap(&rw->lock, old, old + 1) != old)
_raw_read_lock_wait_flags(rw, flags); _raw_read_lock_wait_flags(rw, flags);
} }
static inline void arch_read_unlock(arch_rwlock_t *rw) static inline void arch_read_unlock(arch_rwlock_t *rw)
{ {
unsigned int old, cmp; unsigned int old;
old = rw->lock;
do { do {
cmp = old; old = ACCESS_ONCE(rw->lock);
old = _raw_compare_and_swap(&rw->lock, old, old - 1); } while (!_raw_compare_and_swap(&rw->lock, old, old - 1));
} while (cmp != old);
} }
static inline void arch_write_lock(arch_rwlock_t *rw) static inline void arch_write_lock(arch_rwlock_t *rw)
{ {
if (unlikely(_raw_compare_and_swap(&rw->lock, 0, 0x80000000) != 0)) if (!arch_write_trylock_once(rw))
_raw_write_lock_wait(rw); _raw_write_lock_wait(rw);
} }
static inline void arch_write_lock_flags(arch_rwlock_t *rw, unsigned long flags) static inline void arch_write_lock_flags(arch_rwlock_t *rw, unsigned long flags)
{ {
if (unlikely(_raw_compare_and_swap(&rw->lock, 0, 0x80000000) != 0)) if (!arch_write_trylock_once(rw))
_raw_write_lock_wait_flags(rw, flags); _raw_write_lock_wait_flags(rw, flags);
} }
...@@ -161,18 +184,16 @@ static inline void arch_write_unlock(arch_rwlock_t *rw) ...@@ -161,18 +184,16 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
static inline int arch_read_trylock(arch_rwlock_t *rw) static inline int arch_read_trylock(arch_rwlock_t *rw)
{ {
unsigned int old; if (!arch_read_trylock_once(rw))
old = rw->lock & 0x7fffffffU; return _raw_read_trylock_retry(rw);
if (likely(_raw_compare_and_swap(&rw->lock, old, old + 1) == old)) return 1;
return 1;
return _raw_read_trylock_retry(rw);
} }
static inline int arch_write_trylock(arch_rwlock_t *rw) static inline int arch_write_trylock(arch_rwlock_t *rw)
{ {
if (likely(_raw_compare_and_swap(&rw->lock, 0, 0x80000000) == 0)) if (!arch_write_trylock_once(rw))
return 1; return _raw_write_trylock_retry(rw);
return _raw_write_trylock_retry(rw); return 1;
} }
#define arch_read_relax(lock) cpu_relax() #define arch_read_relax(lock) cpu_relax()
......
@@ -6,13 +6,13 @@
 #endif
 typedef struct {
-	volatile unsigned int owner_cpu;
+	unsigned int lock;
 } __attribute__ ((aligned (4))) arch_spinlock_t;
-#define __ARCH_SPIN_LOCK_UNLOCKED { 0 }
+#define __ARCH_SPIN_LOCK_UNLOCKED { .lock = 0, }
 typedef struct {
-	volatile unsigned int lock;
+	unsigned int lock;
 } arch_rwlock_t;
 #define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
......
@@ -132,7 +132,6 @@ static inline void restore_access_regs(unsigned int *acrs)
 		update_cr_regs(next);					\
 	}								\
 	prev = __switch_to(prev,next);					\
-	update_primary_asce(current);					\
 } while (0)
 #define finish_arch_switch(prev) do {					     \
......
@@ -28,7 +28,7 @@ extern const unsigned int sys_call_table_emu[];
 static inline long syscall_get_nr(struct task_struct *task,
 				  struct pt_regs *regs)
 {
-	return test_tsk_thread_flag(task, TIF_SYSCALL) ?
+	return test_pt_regs_flag(regs, PIF_SYSCALL) ?
 		(regs->int_code & 0xffff) : -1;
 }
......
...@@ -77,32 +77,22 @@ static inline struct thread_info *current_thread_info(void) ...@@ -77,32 +77,22 @@ static inline struct thread_info *current_thread_info(void)
/* /*
* thread information flags bit numbers * thread information flags bit numbers
*/ */
#define TIF_SYSCALL 0 /* inside a system call */ #define TIF_NOTIFY_RESUME 0 /* callback before returning to user */
#define TIF_NOTIFY_RESUME 1 /* callback before returning to user */ #define TIF_SIGPENDING 1 /* signal pending */
#define TIF_SIGPENDING 2 /* signal pending */ #define TIF_NEED_RESCHED 2 /* rescheduling necessary */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */ #define TIF_SYSCALL_TRACE 3 /* syscall trace active */
#define TIF_TLB_WAIT 4 /* wait for TLB flush completion */ #define TIF_SYSCALL_AUDIT 4 /* syscall auditing active */
#define TIF_ASCE 5 /* primary asce needs fixup / uaccess */ #define TIF_SECCOMP 5 /* secure computing */
#define TIF_PER_TRAP 6 /* deliver sigtrap on return to user */ #define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
#define TIF_MCCK_PENDING 7 /* machine check handling is pending */ #define TIF_31BIT 16 /* 32bit process */
#define TIF_SYSCALL_TRACE 8 /* syscall trace active */ #define TIF_MEMDIE 17 /* is terminating due to OOM killer */
#define TIF_SYSCALL_AUDIT 9 /* syscall auditing active */ #define TIF_RESTORE_SIGMASK 18 /* restore signal mask in do_signal() */
#define TIF_SECCOMP 10 /* secure computing */ #define TIF_SINGLE_STEP 19 /* This task is single stepped */
#define TIF_SYSCALL_TRACEPOINT 11 /* syscall tracepoint instrumentation */ #define TIF_BLOCK_STEP 20 /* This task is block stepped */
#define TIF_31BIT 17 /* 32bit process */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 19 /* restore signal mask in do_signal() */
#define TIF_SINGLE_STEP 20 /* This task is single stepped */
#define TIF_BLOCK_STEP 21 /* This task is block stepped */
#define _TIF_SYSCALL (1<<TIF_SYSCALL)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define _TIF_SIGPENDING (1<<TIF_SIGPENDING) #define _TIF_SIGPENDING (1<<TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED) #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_TLB_WAIT (1<<TIF_TLB_WAIT)
#define _TIF_ASCE (1<<TIF_ASCE)
#define _TIF_PER_TRAP (1<<TIF_PER_TRAP)
#define _TIF_MCCK_PENDING (1<<TIF_MCCK_PENDING)
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT) #define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1<<TIF_SECCOMP) #define _TIF_SECCOMP (1<<TIF_SECCOMP)
......
...@@ -132,6 +132,34 @@ unsigned long __must_check __copy_to_user(void __user *to, const void *from, ...@@ -132,6 +132,34 @@ unsigned long __must_check __copy_to_user(void __user *to, const void *from,
#define __copy_to_user_inatomic __copy_to_user #define __copy_to_user_inatomic __copy_to_user
#define __copy_from_user_inatomic __copy_from_user #define __copy_from_user_inatomic __copy_from_user
#ifdef CONFIG_HAVE_MARCH_Z10_FEATURES
#define __put_get_user_asm(to, from, size, spec) \
({ \
register unsigned long __reg0 asm("0") = spec; \
int __rc; \
\
asm volatile( \
"0: mvcos %1,%3,%2\n" \
"1: xr %0,%0\n" \
"2:\n" \
".pushsection .fixup, \"ax\"\n" \
"3: lhi %0,%5\n" \
" jg 2b\n" \
".popsection\n" \
EX_TABLE(0b,3b) EX_TABLE(1b,3b) \
: "=d" (__rc), "=Q" (*(to)) \
: "d" (size), "Q" (*(from)), \
"d" (__reg0), "K" (-EFAULT) \
: "cc"); \
__rc; \
})
#define __put_user_fn(x, ptr, size) __put_get_user_asm(ptr, x, size, 0x810000UL)
#define __get_user_fn(x, ptr, size) __put_get_user_asm(x, ptr, size, 0x81UL)
#else /* CONFIG_HAVE_MARCH_Z10_FEATURES */
static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size) static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
{ {
size = __copy_to_user(ptr, x, size); size = __copy_to_user(ptr, x, size);
...@@ -144,6 +172,8 @@ static inline int __get_user_fn(void *x, const void __user *ptr, unsigned long s ...@@ -144,6 +172,8 @@ static inline int __get_user_fn(void *x, const void __user *ptr, unsigned long s
return size ? -EFAULT : 0; return size ? -EFAULT : 0;
} }
#endif /* CONFIG_HAVE_MARCH_Z10_FEATURES */
/* /*
* These are the main single-value transfer routines. They automatically * These are the main single-value transfer routines. They automatically
* use the right size if we just have the right pointer type. * use the right size if we just have the right pointer type.
......
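The uaccess hunk above is what the pull message means by get_user/put_user becoming inline again for z10 and newer: callers do not change, only the expansion does. A minimal usage sketch (the function and variable names here are made up for illustration, not taken from the series):

#include <linux/uaccess.h>

/* Hedged sketch: with CONFIG_HAVE_MARCH_Z10_FEATURES the generic
 * get_user() below expands to the inline mvcos-based variant shown in
 * the hunk above instead of an out-of-line __copy_from_user() call. */
static int read_user_word(int __user *uptr, int *result)
{
        int val;

        if (get_user(val, uptr))
                return -EFAULT;
        *result = val;
        return 0;
}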
@@ -50,6 +50,7 @@ int main(void)
 	DEFINE(__PT_INT_CODE, offsetof(struct pt_regs, int_code));
 	DEFINE(__PT_INT_PARM, offsetof(struct pt_regs, int_parm));
 	DEFINE(__PT_INT_PARM_LONG, offsetof(struct pt_regs, int_parm_long));
+	DEFINE(__PT_FLAGS, offsetof(struct pt_regs, flags));
 	DEFINE(__PT_SIZE, sizeof(struct pt_regs));
 	BLANK();
 	DEFINE(__SF_BACKCHAIN, offsetof(struct stack_frame, back_chain));
@@ -115,6 +116,7 @@ int main(void)
 	DEFINE(__LC_SAVE_AREA_SYNC, offsetof(struct _lowcore, save_area_sync));
 	DEFINE(__LC_SAVE_AREA_ASYNC, offsetof(struct _lowcore, save_area_async));
 	DEFINE(__LC_SAVE_AREA_RESTART, offsetof(struct _lowcore, save_area_restart));
+	DEFINE(__LC_CPU_FLAGS, offsetof(struct _lowcore, cpu_flags));
 	DEFINE(__LC_RETURN_PSW, offsetof(struct _lowcore, return_psw));
 	DEFINE(__LC_RETURN_MCCK_PSW, offsetof(struct _lowcore, return_mcck_psw));
 	DEFINE(__LC_SYNC_ENTER_TIMER, offsetof(struct _lowcore, sync_enter_timer));
@@ -142,7 +144,6 @@ int main(void)
 	DEFINE(__LC_MCCK_CLOCK, offsetof(struct _lowcore, mcck_clock));
 	DEFINE(__LC_MACHINE_FLAGS, offsetof(struct _lowcore, machine_flags));
 	DEFINE(__LC_FTRACE_FUNC, offsetof(struct _lowcore, ftrace_func));
-	DEFINE(__LC_IRB, offsetof(struct _lowcore, irb));
 	DEFINE(__LC_DUMP_REIPL, offsetof(struct _lowcore, ipib));
 	BLANK();
 	DEFINE(__LC_CPU_TIMER_SAVE_AREA, offsetof(struct _lowcore, cpu_timer_save_area));
......
@@ -213,7 +213,7 @@ static int restore_sigregs32(struct pt_regs *regs,_sigregs32 __user *sregs)
 	       sizeof(current->thread.fp_regs));
 	restore_fp_regs(current->thread.fp_regs.fprs);
-	clear_thread_flag(TIF_SYSCALL);	/* No longer in a system call */
+	clear_pt_regs_flag(regs, PIF_SYSCALL); /* No longer in a system call */
 	return 0;
 }
......
...@@ -13,6 +13,7 @@ ...@@ -13,6 +13,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/bootmem.h> #include <linux/bootmem.h>
#include <linux/elf.h> #include <linux/elf.h>
#include <linux/memblock.h>
#include <asm/os_info.h> #include <asm/os_info.h>
#include <asm/elf.h> #include <asm/elf.h>
#include <asm/ipl.h> #include <asm/ipl.h>
...@@ -22,6 +23,24 @@ ...@@ -22,6 +23,24 @@
#define PTR_SUB(x, y) (((char *) (x)) - ((unsigned long) (y))) #define PTR_SUB(x, y) (((char *) (x)) - ((unsigned long) (y)))
#define PTR_DIFF(x, y) ((unsigned long)(((char *) (x)) - ((unsigned long) (y)))) #define PTR_DIFF(x, y) ((unsigned long)(((char *) (x)) - ((unsigned long) (y))))
static struct memblock_region oldmem_region;
static struct memblock_type oldmem_type = {
.cnt = 1,
.max = 1,
.total_size = 0,
.regions = &oldmem_region,
};
#define for_each_dump_mem_range(i, nid, p_start, p_end, p_nid) \
for (i = 0, __next_mem_range(&i, nid, &memblock.physmem, \
&oldmem_type, p_start, \
p_end, p_nid); \
i != (u64)ULLONG_MAX; \
__next_mem_range(&i, nid, &memblock.physmem, \
&oldmem_type, \
p_start, p_end, p_nid))
struct dump_save_areas dump_save_areas; struct dump_save_areas dump_save_areas;
/* /*
...@@ -263,19 +282,6 @@ static void *kzalloc_panic(int len) ...@@ -263,19 +282,6 @@ static void *kzalloc_panic(int len)
return rc; return rc;
} }
/*
* Get memory layout and create hole for oldmem
*/
static struct mem_chunk *get_memory_layout(void)
{
struct mem_chunk *chunk_array;
chunk_array = kzalloc_panic(MEMORY_CHUNKS * sizeof(struct mem_chunk));
detect_memory_layout(chunk_array, 0);
create_mem_hole(chunk_array, OLDMEM_BASE, OLDMEM_SIZE);
return chunk_array;
}
/* /*
* Initialize ELF note * Initialize ELF note
*/ */
...@@ -490,52 +496,33 @@ static int get_cpu_cnt(void) ...@@ -490,52 +496,33 @@ static int get_cpu_cnt(void)
*/ */
static int get_mem_chunk_cnt(void) static int get_mem_chunk_cnt(void)
{ {
struct mem_chunk *chunk_array, *mem_chunk; int cnt = 0;
int i, cnt = 0; u64 idx;
chunk_array = get_memory_layout(); for_each_dump_mem_range(idx, NUMA_NO_NODE, NULL, NULL, NULL)
for (i = 0; i < MEMORY_CHUNKS; i++) {
mem_chunk = &chunk_array[i];
if (chunk_array[i].type != CHUNK_READ_WRITE &&
chunk_array[i].type != CHUNK_READ_ONLY)
continue;
if (mem_chunk->size == 0)
continue;
cnt++; cnt++;
}
kfree(chunk_array);
return cnt; return cnt;
} }
/* /*
* Initialize ELF loads (new kernel) * Initialize ELF loads (new kernel)
*/ */
static int loads_init(Elf64_Phdr *phdr, u64 loads_offset) static void loads_init(Elf64_Phdr *phdr, u64 loads_offset)
{ {
struct mem_chunk *chunk_array, *mem_chunk; phys_addr_t start, end;
int i; u64 idx;
chunk_array = get_memory_layout(); for_each_dump_mem_range(idx, NUMA_NO_NODE, &start, &end, NULL) {
for (i = 0; i < MEMORY_CHUNKS; i++) { phdr->p_filesz = end - start;
mem_chunk = &chunk_array[i];
if (mem_chunk->size == 0)
continue;
if (chunk_array[i].type != CHUNK_READ_WRITE &&
chunk_array[i].type != CHUNK_READ_ONLY)
continue;
else
phdr->p_filesz = mem_chunk->size;
phdr->p_type = PT_LOAD; phdr->p_type = PT_LOAD;
phdr->p_offset = mem_chunk->addr; phdr->p_offset = start;
phdr->p_vaddr = mem_chunk->addr; phdr->p_vaddr = start;
phdr->p_paddr = mem_chunk->addr; phdr->p_paddr = start;
phdr->p_memsz = mem_chunk->size; phdr->p_memsz = end - start;
phdr->p_flags = PF_R | PF_W | PF_X; phdr->p_flags = PF_R | PF_W | PF_X;
phdr->p_align = PAGE_SIZE; phdr->p_align = PAGE_SIZE;
phdr++; phdr++;
} }
kfree(chunk_array);
return i;
} }
/* /*
...@@ -584,6 +571,14 @@ int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size) ...@@ -584,6 +571,14 @@ int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size)
/* If we cannot get HSA size for zfcpdump return error */ /* If we cannot get HSA size for zfcpdump return error */
if (ipl_info.type == IPL_TYPE_FCP_DUMP && !sclp_get_hsa_size()) if (ipl_info.type == IPL_TYPE_FCP_DUMP && !sclp_get_hsa_size())
return -ENODEV; return -ENODEV;
/* For kdump, exclude previous crashkernel memory */
if (OLDMEM_BASE) {
oldmem_region.base = OLDMEM_BASE;
oldmem_region.size = OLDMEM_SIZE;
oldmem_type.total_size = OLDMEM_SIZE;
}
mem_chunk_cnt = get_mem_chunk_cnt(); mem_chunk_cnt = get_mem_chunk_cnt();
alloc_size = 0x1000 + get_cpu_cnt() * 0x300 + alloc_size = 0x1000 + get_cpu_cnt() * 0x300 +
......
@@ -258,13 +258,19 @@ static __init void setup_topology(void)
 static void early_pgm_check_handler(void)
 {
 	const struct exception_table_entry *fixup;
+	unsigned long cr0, cr0_new;
 	unsigned long addr;
 	addr = S390_lowcore.program_old_psw.addr;
 	fixup = search_exception_tables(addr & PSW_ADDR_INSN);
 	if (!fixup)
 		disabled_wait(0);
+	/* Disable low address protection before storing into lowcore. */
+	__ctl_store(cr0, 0, 0);
+	cr0_new = cr0 & ~(1UL << 28);
+	__ctl_load(cr0_new, 0, 0);
 	S390_lowcore.program_old_psw.addr = extable_fixup(fixup)|PSW_ADDR_AMODE;
+	__ctl_load(cr0, 0, 0);
 }
 static noinline __init void setup_lowcore_early(void)
......
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/processor.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/errno.h> #include <asm/errno.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
...@@ -37,18 +38,16 @@ __PT_R13 = __PT_GPRS + 524 ...@@ -37,18 +38,16 @@ __PT_R13 = __PT_GPRS + 524
__PT_R14 = __PT_GPRS + 56 __PT_R14 = __PT_GPRS + 56
__PT_R15 = __PT_GPRS + 60 __PT_R15 = __PT_GPRS + 60
_TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
_TIF_MCCK_PENDING | _TIF_PER_TRAP | _TIF_ASCE)
_TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
_TIF_MCCK_PENDING | _TIF_ASCE)
_TIF_TRACE = (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | \
_TIF_SYSCALL_TRACEPOINT)
_TIF_TRANSFER = (_TIF_MCCK_PENDING | _TIF_TLB_WAIT)
STACK_SHIFT = PAGE_SHIFT + THREAD_ORDER STACK_SHIFT = PAGE_SHIFT + THREAD_ORDER
STACK_SIZE = 1 << STACK_SHIFT STACK_SIZE = 1 << STACK_SHIFT
STACK_INIT = STACK_SIZE - STACK_FRAME_OVERHEAD - __PT_SIZE STACK_INIT = STACK_SIZE - STACK_FRAME_OVERHEAD - __PT_SIZE
_TIF_WORK = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED)
_TIF_TRACE = (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | \
_TIF_SYSCALL_TRACEPOINT)
_CIF_WORK = (_CIF_MCCK_PENDING | _CIF_ASCE)
_PIF_WORK = (_PIF_PER_TRAP)
#define BASED(name) name-system_call(%r13) #define BASED(name) name-system_call(%r13)
.macro TRACE_IRQS_ON .macro TRACE_IRQS_ON
...@@ -160,13 +159,7 @@ ENTRY(__switch_to) ...@@ -160,13 +159,7 @@ ENTRY(__switch_to)
lctl %c4,%c4,__TASK_pid(%r3) # load pid to control reg. 4 lctl %c4,%c4,__TASK_pid(%r3) # load pid to control reg. 4
mvc __LC_CURRENT_PID(4,%r0),__TASK_pid(%r3) # store pid of next mvc __LC_CURRENT_PID(4,%r0),__TASK_pid(%r3) # store pid of next
l %r15,__THREAD_ksp(%r3) # load kernel stack of next l %r15,__THREAD_ksp(%r3) # load kernel stack of next
lhi %r6,_TIF_TRANSFER # transfer TIF bits lm %r6,%r15,__SF_GPRS(%r15) # load gprs of next task
n %r6,__TI_flags(%r4) # isolate TIF bits
jz 0f
o %r6,__TI_flags(%r5) # set TIF bits of next
st %r6,__TI_flags(%r5)
ni __TI_flags+3(%r4),255-_TIF_TRANSFER # clear TIF bits of prev
0: lm %r6,%r15,__SF_GPRS(%r15) # load gprs of next task
br %r14 br %r14
__critical_start: __critical_start:
...@@ -181,6 +174,7 @@ sysc_stm: ...@@ -181,6 +174,7 @@ sysc_stm:
stm %r8,%r15,__LC_SAVE_AREA_SYNC stm %r8,%r15,__LC_SAVE_AREA_SYNC
l %r12,__LC_THREAD_INFO l %r12,__LC_THREAD_INFO
l %r13,__LC_SVC_NEW_PSW+4 l %r13,__LC_SVC_NEW_PSW+4
lhi %r14,_PIF_SYSCALL
sysc_per: sysc_per:
l %r15,__LC_KERNEL_STACK l %r15,__LC_KERNEL_STACK
la %r11,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs la %r11,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs
...@@ -190,8 +184,8 @@ sysc_vtime: ...@@ -190,8 +184,8 @@ sysc_vtime:
mvc __PT_R8(32,%r11),__LC_SAVE_AREA_SYNC mvc __PT_R8(32,%r11),__LC_SAVE_AREA_SYNC
mvc __PT_PSW(8,%r11),__LC_SVC_OLD_PSW mvc __PT_PSW(8,%r11),__LC_SVC_OLD_PSW
mvc __PT_INT_CODE(4,%r11),__LC_SVC_ILC mvc __PT_INT_CODE(4,%r11),__LC_SVC_ILC
st %r14,__PT_FLAGS(%r11)
sysc_do_svc: sysc_do_svc:
oi __TI_flags+3(%r12),_TIF_SYSCALL
l %r10,__TI_sysc_table(%r12) # 31 bit system call table l %r10,__TI_sysc_table(%r12) # 31 bit system call table
lh %r8,__PT_INT_CODE+2(%r11) lh %r8,__PT_INT_CODE+2(%r11)
sla %r8,2 # shift and test for svc0 sla %r8,2 # shift and test for svc0
...@@ -207,7 +201,7 @@ sysc_nr_ok: ...@@ -207,7 +201,7 @@ sysc_nr_ok:
st %r2,__PT_ORIG_GPR2(%r11) st %r2,__PT_ORIG_GPR2(%r11)
st %r7,STACK_FRAME_OVERHEAD(%r15) st %r7,STACK_FRAME_OVERHEAD(%r15)
l %r9,0(%r8,%r10) # get system call addr. l %r9,0(%r8,%r10) # get system call addr.
tm __TI_flags+2(%r12),_TIF_TRACE >> 8 tm __TI_flags+3(%r12),_TIF_TRACE
jnz sysc_tracesys jnz sysc_tracesys
basr %r14,%r9 # call sys_xxxx basr %r14,%r9 # call sys_xxxx
st %r2,__PT_R2(%r11) # store return value st %r2,__PT_R2(%r11) # store return value
...@@ -217,9 +211,12 @@ sysc_return: ...@@ -217,9 +211,12 @@ sysc_return:
sysc_tif: sysc_tif:
tm __PT_PSW+1(%r11),0x01 # returning to user ? tm __PT_PSW+1(%r11),0x01 # returning to user ?
jno sysc_restore jno sysc_restore
tm __TI_flags+3(%r12),_TIF_WORK_SVC tm __PT_FLAGS+3(%r11),_PIF_WORK
jnz sysc_work # check for work jnz sysc_work
ni __TI_flags+3(%r12),255-_TIF_SYSCALL tm __TI_flags+3(%r12),_TIF_WORK
jnz sysc_work # check for thread work
tm __LC_CPU_FLAGS+3,_CIF_WORK
jnz sysc_work
sysc_restore: sysc_restore:
mvc __LC_RETURN_PSW(8),__PT_PSW(%r11) mvc __LC_RETURN_PSW(8),__PT_PSW(%r11)
stpt __LC_EXIT_TIMER stpt __LC_EXIT_TIMER
...@@ -231,17 +228,17 @@ sysc_done: ...@@ -231,17 +228,17 @@ sysc_done:
# One of the work bits is on. Find out which one. # One of the work bits is on. Find out which one.
# #
sysc_work: sysc_work:
tm __TI_flags+3(%r12),_TIF_MCCK_PENDING tm __LC_CPU_FLAGS+3,_CIF_MCCK_PENDING
jo sysc_mcck_pending jo sysc_mcck_pending
tm __TI_flags+3(%r12),_TIF_NEED_RESCHED tm __TI_flags+3(%r12),_TIF_NEED_RESCHED
jo sysc_reschedule jo sysc_reschedule
tm __TI_flags+3(%r12),_TIF_PER_TRAP tm __PT_FLAGS+3(%r11),_PIF_PER_TRAP
jo sysc_singlestep jo sysc_singlestep
tm __TI_flags+3(%r12),_TIF_SIGPENDING tm __TI_flags+3(%r12),_TIF_SIGPENDING
jo sysc_sigpending jo sysc_sigpending
tm __TI_flags+3(%r12),_TIF_NOTIFY_RESUME tm __TI_flags+3(%r12),_TIF_NOTIFY_RESUME
jo sysc_notify_resume jo sysc_notify_resume
tm __TI_flags+3(%r12),_TIF_ASCE tm __LC_CPU_FLAGS+3,_CIF_ASCE
jo sysc_uaccess jo sysc_uaccess
j sysc_return # beware of critical section cleanup j sysc_return # beware of critical section cleanup
...@@ -254,7 +251,7 @@ sysc_reschedule: ...@@ -254,7 +251,7 @@ sysc_reschedule:
br %r1 # call schedule br %r1 # call schedule
# #
# _TIF_MCCK_PENDING is set, call handler # _CIF_MCCK_PENDING is set, call handler
# #
sysc_mcck_pending: sysc_mcck_pending:
l %r1,BASED(.Lhandle_mcck) l %r1,BASED(.Lhandle_mcck)
...@@ -262,10 +259,10 @@ sysc_mcck_pending: ...@@ -262,10 +259,10 @@ sysc_mcck_pending:
br %r1 # TIF bit will be cleared by handler br %r1 # TIF bit will be cleared by handler
# #
# _TIF_ASCE is set, load user space asce # _CIF_ASCE is set, load user space asce
# #
sysc_uaccess: sysc_uaccess:
ni __TI_flags+3(%r12),255-_TIF_ASCE ni __LC_CPU_FLAGS+3,255-_CIF_ASCE
lctl %c1,%c1,__LC_USER_ASCE # load primary asce lctl %c1,%c1,__LC_USER_ASCE # load primary asce
j sysc_return j sysc_return
@@ -276,7 +273,7 @@ sysc_sigpending:
	lr %r2,%r11			# pass pointer to pt_regs
	l %r1,BASED(.Ldo_signal)
	basr %r14,%r1			# call do_signal
-	tm __TI_flags+3(%r12),_TIF_SYSCALL
+	tm __PT_FLAGS+3(%r11),_PIF_SYSCALL
	jno sysc_return
	lm %r2,%r7,__PT_R2(%r11)	# load svc arguments
	l %r10,__TI_sysc_table(%r12)	# 31 bit system call table
@@ -297,10 +294,10 @@ sysc_notify_resume:
	br %r1				# call do_notify_resume
#
-# _TIF_PER_TRAP is set, call do_per_trap
+# _PIF_PER_TRAP is set, call do_per_trap
#
sysc_singlestep:
-	ni __TI_flags+3(%r12),255-_TIF_PER_TRAP
+	ni __PT_FLAGS+3(%r11),255-_PIF_PER_TRAP
	lr %r2,%r11			# pass pointer to pt_regs
	l %r1,BASED(.Ldo_per_trap)
	la %r14,BASED(sysc_return)
@@ -330,7 +327,7 @@ sysc_tracego:
	basr %r14,%r9			# call sys_xxx
	st %r2,__PT_R2(%r11)		# store return value
sysc_tracenogo:
-	tm __TI_flags+2(%r12),_TIF_TRACE >> 8
+	tm __TI_flags+3(%r12),_TIF_TRACE
	jz sysc_return
	l %r1,BASED(.Ltrace_exit)
	lr %r2,%r11			# pass pointer to pt_regs
@@ -384,12 +381,13 @@ ENTRY(pgm_check_handler)
	stm %r8,%r9,__PT_PSW(%r11)
	mvc __PT_INT_CODE(4,%r11),__LC_PGM_ILC
	mvc __PT_INT_PARM_LONG(4,%r11),__LC_TRANS_EXC_CODE
+	xc __PT_FLAGS(4,%r11),__PT_FLAGS(%r11)
	tm __LC_PGM_ILC+3,0x80		# check for per exception
	jz 0f
	l %r1,__TI_task(%r12)
	tmh %r8,0x0001			# kernel per event ?
	jz pgm_kprobe
-	oi __TI_flags+3(%r12),_TIF_PER_TRAP
+	oi __PT_FLAGS+3(%r11),_PIF_PER_TRAP
	mvc __THREAD_per_address(4,%r1),__LC_PER_ADDRESS
	mvc __THREAD_per_cause(2,%r1),__LC_PER_CAUSE
	mvc __THREAD_per_paid(1,%r1),__LC_PER_PAID
@@ -420,9 +418,9 @@ pgm_kprobe:
# single stepped system call
#
pgm_svcper:
-	oi __TI_flags+3(%r12),_TIF_PER_TRAP
	mvc __LC_RETURN_PSW(4),__LC_SVC_NEW_PSW
	mvc __LC_RETURN_PSW+4(4),BASED(.Lsysc_per)
+	lhi %r14,_PIF_SYSCALL | _PIF_PER_TRAP
	lpsw __LC_RETURN_PSW		# branch to sysc_per and enable irqs
/*
@@ -445,6 +443,7 @@ io_skip:
	mvc __PT_R8(32,%r11),__LC_SAVE_AREA_ASYNC
	stm %r8,%r9,__PT_PSW(%r11)
	mvc __PT_INT_CODE(12,%r11),__LC_SUBCHANNEL_ID
+	xc __PT_FLAGS(4,%r11),__PT_FLAGS(%r11)
	TRACE_IRQS_OFF
	xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15)
io_loop:
@@ -466,8 +465,10 @@ io_return:
	LOCKDEP_SYS_EXIT
	TRACE_IRQS_ON
io_tif:
-	tm __TI_flags+3(%r12),_TIF_WORK_INT
+	tm __TI_flags+3(%r12),_TIF_WORK
	jnz io_work			# there is work to do (signals etc.)
+	tm __LC_CPU_FLAGS+3,_CIF_WORK
+	jnz io_work
io_restore:
	mvc __LC_RETURN_PSW(8),__PT_PSW(%r11)
	stpt __LC_EXIT_TIMER
@@ -477,7 +478,7 @@ io_done:
#
# There is work todo, find out in which context we have been interrupted:
-# 1) if we return to user space we can do all _TIF_WORK_INT work
+# 1) if we return to user space we can do all _TIF_WORK work
# 2) if we return to kernel code and preemptive scheduling is enabled check
#    the preemption counter and if it is zero call preempt_schedule_irq
# Before any work can be done, a switch to the kernel stack is required.
@@ -520,11 +521,9 @@ io_work_user:
#
# One of the work bits is on. Find out which one.
-# Checked are: _TIF_SIGPENDING, _TIF_NOTIFY_RESUME, _TIF_NEED_RESCHED
-# and _TIF_MCCK_PENDING
#
io_work_tif:
-	tm __TI_flags+3(%r12),_TIF_MCCK_PENDING
+	tm __LC_CPU_FLAGS+3(%r12),_CIF_MCCK_PENDING
	jo io_mcck_pending
	tm __TI_flags+3(%r12),_TIF_NEED_RESCHED
	jo io_reschedule
@@ -532,12 +531,12 @@ io_work_tif:
	jo io_sigpending
	tm __TI_flags+3(%r12),_TIF_NOTIFY_RESUME
	jo io_notify_resume
-	tm __TI_flags+3(%r12),_TIF_ASCE
+	tm __LC_CPU_FLAGS+3,_CIF_ASCE
	jo io_uaccess
	j io_return			# beware of critical section cleanup
#
-# _TIF_MCCK_PENDING is set, call handler
+# _CIF_MCCK_PENDING is set, call handler
#
io_mcck_pending:
	# TRACE_IRQS_ON already done at io_return
@@ -547,10 +546,10 @@ io_mcck_pending:
	j io_return
#
-# _TIF_ASCE is set, load user space asce
+# _CIF_ASCE is set, load user space asce
#
io_uaccess:
-	ni __TI_flags+3(%r12),255-_TIF_ASCE
+	ni __LC_CPU_FLAGS+3,255-_CIF_ASCE
	lctl %c1,%c1,__LC_USER_ASCE	# load primary asce
	j io_return
@@ -613,6 +612,7 @@ ext_skip:
	stm %r8,%r9,__PT_PSW(%r11)
	mvc __PT_INT_CODE(4,%r11),__LC_EXT_CPU_ADDR
	mvc __PT_INT_PARM(4,%r11),__LC_EXT_PARAMS
+	xc __PT_FLAGS(4,%r11),__PT_FLAGS(%r11)
	TRACE_IRQS_OFF
	l %r1,BASED(.Ldo_IRQ)
	lr %r2,%r11			# pass pointer to pt_regs
@@ -677,6 +677,7 @@ mcck_skip:
	stm %r0,%r7,__PT_R0(%r11)
	mvc __PT_R8(32,%r11),__LC_GPREGS_SAVE_AREA+32
	stm %r8,%r9,__PT_PSW(%r11)
+	xc __PT_FLAGS(4,%r11),__PT_FLAGS(%r11)
	xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15)
	l %r1,BASED(.Ldo_machine_check)
	lr %r2,%r11			# pass pointer to pt_regs
@@ -689,7 +690,7 @@ mcck_skip:
	la %r11,STACK_FRAME_OVERHEAD(%r15)
	lr %r15,%r1
	ssm __LC_PGM_NEW_PSW		# turn dat on, keep irqs off
-	tm __TI_flags+3(%r12),_TIF_MCCK_PENDING
+	tm __LC_CPU_FLAGS+3,_CIF_MCCK_PENDING
	jno mcck_return
	TRACE_IRQS_OFF
	l %r1,BASED(.Lhandle_mcck)
@@ -842,6 +843,8 @@ cleanup_system_call:
	stm %r0,%r7,__PT_R0(%r9)
	mvc __PT_PSW(8,%r9),__LC_SVC_OLD_PSW
	mvc __PT_INT_CODE(4,%r9),__LC_SVC_ILC
+	xc __PT_FLAGS(4,%r9),__PT_FLAGS(%r9)
+	mvi __PT_FLAGS+3(%r9),_PIF_SYSCALL
	# setup saved register 15
	st %r15,28(%r11)		# r15 stack pointer
	# set new psw address and exit
......
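The pattern behind most of the hunks above: the single per-thread TIF word is split into three flag words, each tested from its natural home. TIF bits stay in thread_info (__TI_flags), the new CIF bits live in the per-cpu lowcore (__LC_CPU_FLAGS), and the new PIF bits live in pt_regs (__PT_FLAGS) and are zeroed by the added "xc __PT_FLAGS(...)" on every entry. A minimal C sketch of that layout; bit values here are assumptions for illustration only, the real definitions live in thread_info.h, processor.h and ptrace.h, which are not part of this view:

/* Sketch only - names taken from the diff, bit numbers are assumed. */

/* TIF: per-task flags, struct thread_info, tested via __TI_flags */
#define TIF_NOTIFY_RESUME	0
#define TIF_SIGPENDING		1
#define TIF_NEED_RESCHED	2

/* CIF: per-cpu flags, struct _lowcore, tested via __LC_CPU_FLAGS */
#define CIF_MCCK_PENDING	0	/* machine check handling pending */
#define CIF_ASCE		1	/* user asce needs to be (re)loaded */

/* PIF: per-syscall flags, struct pt_regs, tested via __PT_FLAGS */
#define PIF_SYSCALL		0	/* inside a system call */
#define PIF_PER_TRAP		1	/* deliver PER trap on exit to user */

#define _PIF_SYSCALL		(1 << PIF_SYSCALL)
#define _PIF_PER_TRAP		(1 << PIF_PER_TRAP)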
@@ -42,13 +42,11 @@ STACK_SHIFT = PAGE_SHIFT + THREAD_ORDER
STACK_SIZE  = 1 << STACK_SHIFT
STACK_INIT = STACK_SIZE - STACK_FRAME_OVERHEAD - __PT_SIZE
-_TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
-		 _TIF_MCCK_PENDING | _TIF_PER_TRAP | _TIF_ASCE)
-_TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
-		 _TIF_MCCK_PENDING | _TIF_ASCE)
-_TIF_TRACE = (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | \
-	      _TIF_SYSCALL_TRACEPOINT)
-_TIF_TRANSFER = (_TIF_MCCK_PENDING | _TIF_TLB_WAIT)
+_TIF_WORK = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED)
+_TIF_TRACE = (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP | \
+	      _TIF_SYSCALL_TRACEPOINT)
+_CIF_WORK = (_CIF_MCCK_PENDING | _CIF_ASCE)
+_PIF_WORK = (_PIF_PER_TRAP)
#define BASED(name) name-system_call(%r13)
@@ -190,13 +188,7 @@ ENTRY(__switch_to)
	lctl %c4,%c4,__TASK_pid(%r3)	# load pid to control reg. 4
	mvc __LC_CURRENT_PID+4(4,%r0),__TASK_pid(%r3)	# store pid of next
	lg %r15,__THREAD_ksp(%r3)	# load kernel stack of next
-	llill %r6,_TIF_TRANSFER		# transfer TIF bits
-	ng %r6,__TI_flags(%r4)		# isolate TIF bits
-	jz 0f
-	og %r6,__TI_flags(%r5)		# set TIF bits of next
-	stg %r6,__TI_flags(%r5)
-	ni __TI_flags+7(%r4),255-_TIF_TRANSFER	# clear TIF bits of prev
-0:	lmg %r6,%r15,__SF_GPRS(%r15)	# load gprs of next task
+	lmg %r6,%r15,__SF_GPRS(%r15)	# load gprs of next task
	br %r14
__critical_start:
...@@ -211,6 +203,7 @@ sysc_stmg: ...@@ -211,6 +203,7 @@ sysc_stmg:
stmg %r8,%r15,__LC_SAVE_AREA_SYNC stmg %r8,%r15,__LC_SAVE_AREA_SYNC
lg %r10,__LC_LAST_BREAK lg %r10,__LC_LAST_BREAK
lg %r12,__LC_THREAD_INFO lg %r12,__LC_THREAD_INFO
lghi %r14,_PIF_SYSCALL
sysc_per: sysc_per:
lg %r15,__LC_KERNEL_STACK lg %r15,__LC_KERNEL_STACK
la %r11,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs la %r11,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs
...@@ -221,8 +214,8 @@ sysc_vtime: ...@@ -221,8 +214,8 @@ sysc_vtime:
mvc __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC mvc __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
mvc __PT_PSW(16,%r11),__LC_SVC_OLD_PSW mvc __PT_PSW(16,%r11),__LC_SVC_OLD_PSW
mvc __PT_INT_CODE(4,%r11),__LC_SVC_ILC mvc __PT_INT_CODE(4,%r11),__LC_SVC_ILC
stg %r14,__PT_FLAGS(%r11)
sysc_do_svc: sysc_do_svc:
oi __TI_flags+7(%r12),_TIF_SYSCALL
lg %r10,__TI_sysc_table(%r12) # address of system call table lg %r10,__TI_sysc_table(%r12) # address of system call table
llgh %r8,__PT_INT_CODE+2(%r11) llgh %r8,__PT_INT_CODE+2(%r11)
slag %r8,%r8,2 # shift and test for svc 0 slag %r8,%r8,2 # shift and test for svc 0
...@@ -238,7 +231,7 @@ sysc_nr_ok: ...@@ -238,7 +231,7 @@ sysc_nr_ok:
stg %r2,__PT_ORIG_GPR2(%r11) stg %r2,__PT_ORIG_GPR2(%r11)
stg %r7,STACK_FRAME_OVERHEAD(%r15) stg %r7,STACK_FRAME_OVERHEAD(%r15)
lgf %r9,0(%r8,%r10) # get system call add. lgf %r9,0(%r8,%r10) # get system call add.
tm __TI_flags+6(%r12),_TIF_TRACE >> 8 tm __TI_flags+7(%r12),_TIF_TRACE
jnz sysc_tracesys jnz sysc_tracesys
basr %r14,%r9 # call sys_xxxx basr %r14,%r9 # call sys_xxxx
stg %r2,__PT_R2(%r11) # store return value stg %r2,__PT_R2(%r11) # store return value
...@@ -248,9 +241,12 @@ sysc_return: ...@@ -248,9 +241,12 @@ sysc_return:
sysc_tif: sysc_tif:
tm __PT_PSW+1(%r11),0x01 # returning to user ? tm __PT_PSW+1(%r11),0x01 # returning to user ?
jno sysc_restore jno sysc_restore
tm __TI_flags+7(%r12),_TIF_WORK_SVC tm __PT_FLAGS+7(%r11),_PIF_WORK
jnz sysc_work
tm __TI_flags+7(%r12),_TIF_WORK
jnz sysc_work # check for work jnz sysc_work # check for work
ni __TI_flags+7(%r12),255-_TIF_SYSCALL tm __LC_CPU_FLAGS+7,_CIF_WORK
jnz sysc_work
sysc_restore: sysc_restore:
lg %r14,__LC_VDSO_PER_CPU lg %r14,__LC_VDSO_PER_CPU
lmg %r0,%r10,__PT_R0(%r11) lmg %r0,%r10,__PT_R0(%r11)
...@@ -265,17 +261,17 @@ sysc_done: ...@@ -265,17 +261,17 @@ sysc_done:
# One of the work bits is on. Find out which one. # One of the work bits is on. Find out which one.
# #
sysc_work: sysc_work:
tm __TI_flags+7(%r12),_TIF_MCCK_PENDING tm __LC_CPU_FLAGS+7,_CIF_MCCK_PENDING
jo sysc_mcck_pending jo sysc_mcck_pending
tm __TI_flags+7(%r12),_TIF_NEED_RESCHED tm __TI_flags+7(%r12),_TIF_NEED_RESCHED
jo sysc_reschedule jo sysc_reschedule
tm __TI_flags+7(%r12),_TIF_PER_TRAP tm __PT_FLAGS+7(%r11),_PIF_PER_TRAP
jo sysc_singlestep jo sysc_singlestep
tm __TI_flags+7(%r12),_TIF_SIGPENDING tm __TI_flags+7(%r12),_TIF_SIGPENDING
jo sysc_sigpending jo sysc_sigpending
tm __TI_flags+7(%r12),_TIF_NOTIFY_RESUME tm __TI_flags+7(%r12),_TIF_NOTIFY_RESUME
jo sysc_notify_resume jo sysc_notify_resume
tm __TI_flags+7(%r12),_TIF_ASCE tm __LC_CPU_FLAGS+7,_CIF_ASCE
jo sysc_uaccess jo sysc_uaccess
j sysc_return # beware of critical section cleanup j sysc_return # beware of critical section cleanup
...@@ -287,17 +283,17 @@ sysc_reschedule: ...@@ -287,17 +283,17 @@ sysc_reschedule:
jg schedule jg schedule
# #
# _TIF_MCCK_PENDING is set, call handler # _CIF_MCCK_PENDING is set, call handler
# #
sysc_mcck_pending: sysc_mcck_pending:
larl %r14,sysc_return larl %r14,sysc_return
jg s390_handle_mcck # TIF bit will be cleared by handler jg s390_handle_mcck # TIF bit will be cleared by handler
# #
# _TIF_ASCE is set, load user space asce # _CIF_ASCE is set, load user space asce
# #
sysc_uaccess: sysc_uaccess:
ni __TI_flags+7(%r12),255-_TIF_ASCE ni __LC_CPU_FLAGS+7,255-_CIF_ASCE
lctlg %c1,%c1,__LC_USER_ASCE # load primary asce lctlg %c1,%c1,__LC_USER_ASCE # load primary asce
j sysc_return j sysc_return
...@@ -307,7 +303,7 @@ sysc_uaccess: ...@@ -307,7 +303,7 @@ sysc_uaccess:
sysc_sigpending: sysc_sigpending:
lgr %r2,%r11 # pass pointer to pt_regs lgr %r2,%r11 # pass pointer to pt_regs
brasl %r14,do_signal brasl %r14,do_signal
tm __TI_flags+7(%r12),_TIF_SYSCALL tm __PT_FLAGS+7(%r11),_PIF_SYSCALL
jno sysc_return jno sysc_return
lmg %r2,%r7,__PT_R2(%r11) # load svc arguments lmg %r2,%r7,__PT_R2(%r11) # load svc arguments
lg %r10,__TI_sysc_table(%r12) # address of system call table lg %r10,__TI_sysc_table(%r12) # address of system call table
...@@ -327,10 +323,10 @@ sysc_notify_resume: ...@@ -327,10 +323,10 @@ sysc_notify_resume:
jg do_notify_resume jg do_notify_resume
# #
# _TIF_PER_TRAP is set, call do_per_trap # _PIF_PER_TRAP is set, call do_per_trap
# #
sysc_singlestep: sysc_singlestep:
ni __TI_flags+7(%r12),255-_TIF_PER_TRAP ni __PT_FLAGS+7(%r11),255-_PIF_PER_TRAP
lgr %r2,%r11 # pass pointer to pt_regs lgr %r2,%r11 # pass pointer to pt_regs
larl %r14,sysc_return larl %r14,sysc_return
jg do_per_trap jg do_per_trap
...@@ -357,7 +353,7 @@ sysc_tracego: ...@@ -357,7 +353,7 @@ sysc_tracego:
basr %r14,%r9 # call sys_xxx basr %r14,%r9 # call sys_xxx
stg %r2,__PT_R2(%r11) # store return value stg %r2,__PT_R2(%r11) # store return value
sysc_tracenogo: sysc_tracenogo:
tm __TI_flags+6(%r12),_TIF_TRACE >> 8 tm __TI_flags+7(%r12),_TIF_TRACE
jz sysc_return jz sysc_return
lgr %r2,%r11 # pass pointer to pt_regs lgr %r2,%r11 # pass pointer to pt_regs
larl %r14,sysc_return larl %r14,sysc_return
...@@ -416,12 +412,13 @@ ENTRY(pgm_check_handler) ...@@ -416,12 +412,13 @@ ENTRY(pgm_check_handler)
stmg %r8,%r9,__PT_PSW(%r11) stmg %r8,%r9,__PT_PSW(%r11)
mvc __PT_INT_CODE(4,%r11),__LC_PGM_ILC mvc __PT_INT_CODE(4,%r11),__LC_PGM_ILC
mvc __PT_INT_PARM_LONG(8,%r11),__LC_TRANS_EXC_CODE mvc __PT_INT_PARM_LONG(8,%r11),__LC_TRANS_EXC_CODE
xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
stg %r10,__PT_ARGS(%r11) stg %r10,__PT_ARGS(%r11)
tm __LC_PGM_ILC+3,0x80 # check for per exception tm __LC_PGM_ILC+3,0x80 # check for per exception
jz 0f jz 0f
tmhh %r8,0x0001 # kernel per event ? tmhh %r8,0x0001 # kernel per event ?
jz pgm_kprobe jz pgm_kprobe
oi __TI_flags+7(%r12),_TIF_PER_TRAP oi __PT_FLAGS+7(%r11),_PIF_PER_TRAP
mvc __THREAD_per_address(8,%r14),__LC_PER_ADDRESS mvc __THREAD_per_address(8,%r14),__LC_PER_ADDRESS
mvc __THREAD_per_cause(2,%r14),__LC_PER_CAUSE mvc __THREAD_per_cause(2,%r14),__LC_PER_CAUSE
mvc __THREAD_per_paid(1,%r14),__LC_PER_PAID mvc __THREAD_per_paid(1,%r14),__LC_PER_PAID
...@@ -451,10 +448,10 @@ pgm_kprobe: ...@@ -451,10 +448,10 @@ pgm_kprobe:
# single stepped system call # single stepped system call
# #
pgm_svcper: pgm_svcper:
oi __TI_flags+7(%r12),_TIF_PER_TRAP
mvc __LC_RETURN_PSW(8),__LC_SVC_NEW_PSW mvc __LC_RETURN_PSW(8),__LC_SVC_NEW_PSW
larl %r14,sysc_per larl %r14,sysc_per
stg %r14,__LC_RETURN_PSW+8 stg %r14,__LC_RETURN_PSW+8
lghi %r14,_PIF_SYSCALL | _PIF_PER_TRAP
lpswe __LC_RETURN_PSW # branch to sysc_per and enable irqs lpswe __LC_RETURN_PSW # branch to sysc_per and enable irqs
/* /*
...@@ -479,6 +476,7 @@ io_skip: ...@@ -479,6 +476,7 @@ io_skip:
mvc __PT_R8(64,%r11),__LC_SAVE_AREA_ASYNC mvc __PT_R8(64,%r11),__LC_SAVE_AREA_ASYNC
stmg %r8,%r9,__PT_PSW(%r11) stmg %r8,%r9,__PT_PSW(%r11)
mvc __PT_INT_CODE(12,%r11),__LC_SUBCHANNEL_ID mvc __PT_INT_CODE(12,%r11),__LC_SUBCHANNEL_ID
xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
TRACE_IRQS_OFF TRACE_IRQS_OFF
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
io_loop: io_loop:
...@@ -499,8 +497,10 @@ io_return: ...@@ -499,8 +497,10 @@ io_return:
LOCKDEP_SYS_EXIT LOCKDEP_SYS_EXIT
TRACE_IRQS_ON TRACE_IRQS_ON
io_tif: io_tif:
tm __TI_flags+7(%r12),_TIF_WORK_INT tm __TI_flags+7(%r12),_TIF_WORK
jnz io_work # there is work to do (signals etc.) jnz io_work # there is work to do (signals etc.)
tm __LC_CPU_FLAGS+7,_CIF_WORK
jnz io_work
io_restore: io_restore:
lg %r14,__LC_VDSO_PER_CPU lg %r14,__LC_VDSO_PER_CPU
lmg %r0,%r10,__PT_R0(%r11) lmg %r0,%r10,__PT_R0(%r11)
...@@ -513,7 +513,7 @@ io_done: ...@@ -513,7 +513,7 @@ io_done:
# #
# There is work todo, find out in which context we have been interrupted: # There is work todo, find out in which context we have been interrupted:
# 1) if we return to user space we can do all _TIF_WORK_INT work # 1) if we return to user space we can do all _TIF_WORK work
# 2) if we return to kernel code and kvm is enabled check if we need to # 2) if we return to kernel code and kvm is enabled check if we need to
# modify the psw to leave SIE # modify the psw to leave SIE
# 3) if we return to kernel code and preemptive scheduling is enabled check # 3) if we return to kernel code and preemptive scheduling is enabled check
...@@ -557,11 +557,9 @@ io_work_user: ...@@ -557,11 +557,9 @@ io_work_user:
# #
# One of the work bits is on. Find out which one. # One of the work bits is on. Find out which one.
# Checked are: _TIF_SIGPENDING, _TIF_NOTIFY_RESUME, _TIF_NEED_RESCHED
# and _TIF_MCCK_PENDING
# #
io_work_tif: io_work_tif:
tm __TI_flags+7(%r12),_TIF_MCCK_PENDING tm __LC_CPU_FLAGS+7,_CIF_MCCK_PENDING
jo io_mcck_pending jo io_mcck_pending
tm __TI_flags+7(%r12),_TIF_NEED_RESCHED tm __TI_flags+7(%r12),_TIF_NEED_RESCHED
jo io_reschedule jo io_reschedule
...@@ -569,12 +567,12 @@ io_work_tif: ...@@ -569,12 +567,12 @@ io_work_tif:
jo io_sigpending jo io_sigpending
tm __TI_flags+7(%r12),_TIF_NOTIFY_RESUME tm __TI_flags+7(%r12),_TIF_NOTIFY_RESUME
jo io_notify_resume jo io_notify_resume
tm __TI_flags+7(%r12),_TIF_ASCE tm __LC_CPU_FLAGS+7,_CIF_ASCE
jo io_uaccess jo io_uaccess
j io_return # beware of critical section cleanup j io_return # beware of critical section cleanup
# #
# _TIF_MCCK_PENDING is set, call handler # _CIF_MCCK_PENDING is set, call handler
# #
io_mcck_pending: io_mcck_pending:
# TRACE_IRQS_ON already done at io_return # TRACE_IRQS_ON already done at io_return
...@@ -583,10 +581,10 @@ io_mcck_pending: ...@@ -583,10 +581,10 @@ io_mcck_pending:
j io_return j io_return
# #
# _TIF_ASCE is set, load user space asce # _CIF_ASCE is set, load user space asce
# #
io_uaccess: io_uaccess:
ni __TI_flags+7(%r12),255-_TIF_ASCE ni __LC_CPU_FLAGS+7,255-_CIF_ASCE
lctlg %c1,%c1,__LC_USER_ASCE # load primary asce lctlg %c1,%c1,__LC_USER_ASCE # load primary asce
j io_return j io_return
...@@ -650,6 +648,7 @@ ext_skip: ...@@ -650,6 +648,7 @@ ext_skip:
mvc __PT_INT_CODE(4,%r11),__LC_EXT_CPU_ADDR mvc __PT_INT_CODE(4,%r11),__LC_EXT_CPU_ADDR
mvc __PT_INT_PARM(4,%r11),__LC_EXT_PARAMS mvc __PT_INT_PARM(4,%r11),__LC_EXT_PARAMS
mvc __PT_INT_PARM_LONG(8,%r11),0(%r1) mvc __PT_INT_PARM_LONG(8,%r11),0(%r1)
xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
TRACE_IRQS_OFF TRACE_IRQS_OFF
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
lgr %r2,%r11 # pass pointer to pt_regs lgr %r2,%r11 # pass pointer to pt_regs
...@@ -716,6 +715,7 @@ mcck_skip: ...@@ -716,6 +715,7 @@ mcck_skip:
stmg %r0,%r7,__PT_R0(%r11) stmg %r0,%r7,__PT_R0(%r11)
mvc __PT_R8(64,%r11),0(%r14) mvc __PT_R8(64,%r11),0(%r14)
stmg %r8,%r9,__PT_PSW(%r11) stmg %r8,%r9,__PT_PSW(%r11)
xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
lgr %r2,%r11 # pass pointer to pt_regs lgr %r2,%r11 # pass pointer to pt_regs
brasl %r14,s390_do_machine_check brasl %r14,s390_do_machine_check
...@@ -727,7 +727,7 @@ mcck_skip: ...@@ -727,7 +727,7 @@ mcck_skip:
la %r11,STACK_FRAME_OVERHEAD(%r1) la %r11,STACK_FRAME_OVERHEAD(%r1)
lgr %r15,%r1 lgr %r15,%r1
ssm __LC_PGM_NEW_PSW # turn dat on, keep irqs off ssm __LC_PGM_NEW_PSW # turn dat on, keep irqs off
tm __TI_flags+7(%r12),_TIF_MCCK_PENDING tm __LC_CPU_FLAGS+7,_CIF_MCCK_PENDING
jno mcck_return jno mcck_return
TRACE_IRQS_OFF TRACE_IRQS_OFF
brasl %r14,s390_handle_mcck brasl %r14,s390_handle_mcck
...@@ -884,6 +884,8 @@ cleanup_system_call: ...@@ -884,6 +884,8 @@ cleanup_system_call:
stmg %r0,%r7,__PT_R0(%r9) stmg %r0,%r7,__PT_R0(%r9)
mvc __PT_PSW(16,%r9),__LC_SVC_OLD_PSW mvc __PT_PSW(16,%r9),__LC_SVC_OLD_PSW
mvc __PT_INT_CODE(4,%r9),__LC_SVC_ILC mvc __PT_INT_CODE(4,%r9),__LC_SVC_ILC
xc __PT_FLAGS(8,%r9),__PT_FLAGS(%r9)
mvi __PT_FLAGS+7(%r9),_PIF_SYSCALL
# setup saved register r15 # setup saved register r15
stg %r15,56(%r11) # r15 stack pointer stg %r15,56(%r11) # r15 stack pointer
# set new psw address and exit # set new psw address and exit
......
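Condensed into C-like pseudocode, the return-to-user check that sysc_tif/sysc_work implement in both entry files now walks the three flag words in a fixed order. This is a control-flow sketch only, not actual kernel code; helper names such as load_user_asce() stand in for inlined assembly:

/* Pseudocode sketch of sysc_tif/sysc_work above - illustration only. */
static void sysc_exit_work(struct pt_regs *regs)
{
	if (!user_mode(regs))
		return;					/* sysc_restore */
	while (test_pt_regs_flag(regs, PIF_PER_TRAP) ||
	       (current_thread_info()->flags & _TIF_WORK) ||
	       (S390_lowcore.cpu_flags & _CIF_WORK)) {
		if (test_cpu_flag(CIF_MCCK_PENDING))
			s390_handle_mcck();
		else if (need_resched())
			schedule();
		else if (test_pt_regs_flag(regs, PIF_PER_TRAP))
			do_per_trap(regs);	/* PIF_PER_TRAP cleared first */
		else if (test_thread_flag(TIF_SIGPENDING))
			do_signal(regs);
		else if (test_thread_flag(TIF_NOTIFY_RESUME))
			do_notify_resume(regs);
		else if (test_cpu_flag(CIF_ASCE))
			load_user_asce();	/* lctl %c1 + clear CIF_ASCE */
	}
}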
@@ -437,13 +437,13 @@ ENTRY(startup_kdump)
#if defined(CONFIG_64BIT)
#if defined(CONFIG_MARCH_ZEC12)
-	.long 3, 0xc100efe3, 0xf46ce800, 0x00400000
+	.long 3, 0xc100efea, 0xf46ce800, 0x00400000
#elif defined(CONFIG_MARCH_Z196)
-	.long 2, 0xc100efe3, 0xf46c0000
+	.long 2, 0xc100efea, 0xf46c0000
#elif defined(CONFIG_MARCH_Z10)
-	.long 2, 0xc100efe3, 0xf0680000
+	.long 2, 0xc100efea, 0xf0680000
#elif defined(CONFIG_MARCH_Z9_109)
-	.long 1, 0xc100efc3
+	.long 1, 0xc100efc2
#elif defined(CONFIG_MARCH_Z990)
	.long 1, 0xc0002000
#elif defined(CONFIG_MARCH_Z900)
......
@@ -59,7 +59,6 @@ ENTRY(startup_continue)
	.long 0			# cr13: home space segment table
	.long 0xc0000000	# cr14: machine check handling off
	.long 0			# cr15: linkage stack operations
-.Lmchunk:.long memory_chunk
.Lbss_bgn: .long __bss_start
.Lbss_end: .long _end
.Lparmaddr: .long PARMAREA
......
@@ -55,7 +55,7 @@ void s390_handle_mcck(void)
	local_mcck_disable();
	mcck = __get_cpu_var(cpu_mcck);
	memset(&__get_cpu_var(cpu_mcck), 0, sizeof(struct mcck_struct));
-	clear_thread_flag(TIF_MCCK_PENDING);
+	clear_cpu_flag(CIF_MCCK_PENDING);
	local_mcck_enable();
	local_irq_restore(flags);
@@ -313,7 +313,7 @@ void notrace s390_do_machine_check(struct pt_regs *regs)
	 */
	mcck->kill_task = 1;
	mcck->mcck_code = *(unsigned long long *) mci;
-	set_thread_flag(TIF_MCCK_PENDING);
+	set_cpu_flag(CIF_MCCK_PENDING);
	} else {
	/*
	 * Couldn't restore all register contents while in
@@ -352,12 +352,12 @@ void notrace s390_do_machine_check(struct pt_regs *regs)
	if (mci->cp) {
	/* Channel report word pending */
	mcck->channel_report = 1;
-	set_thread_flag(TIF_MCCK_PENDING);
+	set_cpu_flag(CIF_MCCK_PENDING);
	}
	if (mci->w) {
	/* Warning pending */
	mcck->warning = 1;
-	set_thread_flag(TIF_MCCK_PENDING);
+	set_cpu_flag(CIF_MCCK_PENDING);
	}
	nmi_exit();
}
......
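The set_cpu_flag/clear_cpu_flag/test_cpu_flag helpers that replace the thread-flag calls here are not part of this view. A plausible minimal implementation, assuming the flag word sits in the lowcore as the __LC_CPU_FLAGS accesses in the entry code suggest, would be:

/* Assumed helpers (sketch); the real ones live in asm/processor.h. */
static inline void set_cpu_flag(int flag)
{
	S390_lowcore.cpu_flags |= 1U << flag;
}

static inline void clear_cpu_flag(int flag)
{
	S390_lowcore.cpu_flags &= ~(1U << flag);
}

static inline int test_cpu_flag(int flag)
{
	return !!(S390_lowcore.cpu_flags & (1U << flag));
}

Since the word is private to its CPU, a plain read-modify-write is presumably enough here, in contrast to the atomic bitops behind the shared thread_info flags.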
@@ -64,7 +64,7 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
void arch_cpu_idle(void)
{
	local_mcck_disable();
-	if (test_thread_flag(TIF_MCCK_PENDING)) {
+	if (test_cpu_flag(CIF_MCCK_PENDING)) {
	local_mcck_enable();
	local_irq_enable();
	return;
@@ -76,7 +76,7 @@ void arch_cpu_idle(void)
void arch_cpu_idle_exit(void)
{
-	if (test_thread_flag(TIF_MCCK_PENDING))
+	if (test_cpu_flag(CIF_MCCK_PENDING))
	s390_handle_mcck();
}
@@ -123,7 +123,6 @@ int copy_thread(unsigned long clone_flags, unsigned long new_stackp,
	memset(&p->thread.per_user, 0, sizeof(p->thread.per_user));
	memset(&p->thread.per_event, 0, sizeof(p->thread.per_event));
	clear_tsk_thread_flag(p, TIF_SINGLE_STEP);
-	clear_tsk_thread_flag(p, TIF_PER_TRAP);
	/* Initialize per thread user and system timer values */
	ti = task_thread_info(p);
	ti->user_timer = 0;
@@ -152,6 +151,7 @@ int copy_thread(unsigned long clone_flags, unsigned long new_stackp,
	}
	frame->childregs = *current_pt_regs();
	frame->childregs.gprs[2] = 0;	/* child returns 0 on fork. */
+	frame->childregs.flags = 0;
	if (new_stackp)
	frame->childregs.gprs[15] = new_stackp;
......
@@ -136,7 +136,7 @@ void ptrace_disable(struct task_struct *task)
	memset(&task->thread.per_user, 0, sizeof(task->thread.per_user));
	memset(&task->thread.per_event, 0, sizeof(task->thread.per_event));
	clear_tsk_thread_flag(task, TIF_SINGLE_STEP);
-	clear_tsk_thread_flag(task, TIF_PER_TRAP);
+	clear_pt_regs_flag(task_pt_regs(task), PIF_PER_TRAP);
	task->thread.per_flags = 0;
}
@@ -813,7 +813,7 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
	 * debugger stored an invalid system call number. Skip
	 * the system call and the system call restart handling.
	 */
-	clear_thread_flag(TIF_SYSCALL);
+	clear_pt_regs_flag(regs, PIF_SYSCALL);
	ret = -1;
}
......
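Likewise, the pt_regs flag helpers used from here on (set_pt_regs_flag, clear_pt_regs_flag, test_pt_regs_flag) are defined outside this view; a minimal sketch consistent with their use in this diff:

/* Assumed helpers (sketch); the real ones live in asm/ptrace.h. */
static inline void set_pt_regs_flag(struct pt_regs *regs, int flag)
{
	regs->flags |= 1UL << flag;
}

static inline void clear_pt_regs_flag(struct pt_regs *regs, int flag)
{
	regs->flags &= ~(1UL << flag);
}

static inline int test_pt_regs_flag(struct pt_regs *regs, int flag)
{
	return !!(regs->flags & (1UL << flag));
}

This is also why copy_thread() above now zeroes frame->childregs.flags: the PIF state describes a single syscall instance and must not leak into a newly created task.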
@@ -113,7 +113,7 @@ static int restore_sigregs(struct pt_regs *regs, _sigregs __user *sregs)
	sizeof(current->thread.fp_regs));
	restore_fp_regs(current->thread.fp_regs.fprs);
-	clear_thread_flag(TIF_SYSCALL);	/* No longer in a system call */
+	clear_pt_regs_flag(regs, PIF_SYSCALL);	/* No longer in a system call */
	return 0;
}
@@ -356,7 +356,7 @@ void do_signal(struct pt_regs *regs)
	 * call information.
	 */
	current_thread_info()->system_call =
-	test_thread_flag(TIF_SYSCALL) ? regs->int_code : 0;
+	test_pt_regs_flag(regs, PIF_SYSCALL) ? regs->int_code : 0;
	signr = get_signal_to_deliver(&info, &ka, regs, NULL);
	if (signr > 0) {
@@ -384,7 +384,7 @@ void do_signal(struct pt_regs *regs)
	}
	}
	/* No longer in a system call */
-	clear_thread_flag(TIF_SYSCALL);
+	clear_pt_regs_flag(regs, PIF_SYSCALL);
	if (is_compat_task())
	handle_signal32(signr, &ka, &info, oldset, regs);
@@ -394,7 +394,7 @@ void do_signal(struct pt_regs *regs)
	}
	/* No handlers present - check for system call restart */
-	clear_thread_flag(TIF_SYSCALL);
+	clear_pt_regs_flag(regs, PIF_SYSCALL);
	if (current_thread_info()->system_call) {
	regs->int_code = current_thread_info()->system_call;
	switch (regs->gprs[2]) {
@@ -407,9 +407,9 @@ void do_signal(struct pt_regs *regs)
	case -ERESTARTNOINTR:
	/* Restart system call with magic TIF bit. */
	regs->gprs[2] = regs->orig_gpr2;
-	set_thread_flag(TIF_SYSCALL);
+	set_pt_regs_flag(regs, PIF_SYSCALL);
	if (test_thread_flag(TIF_SINGLE_STEP))
-	set_thread_flag(TIF_PER_TRAP);
+	clear_pt_regs_flag(regs, PIF_PER_TRAP);
	break;
	}
	}
......
...@@ -170,6 +170,7 @@ static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu) ...@@ -170,6 +170,7 @@ static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu)
lc->panic_stack = pcpu->panic_stack + PAGE_SIZE lc->panic_stack = pcpu->panic_stack + PAGE_SIZE
- STACK_FRAME_OVERHEAD - sizeof(struct pt_regs); - STACK_FRAME_OVERHEAD - sizeof(struct pt_regs);
lc->cpu_nr = cpu; lc->cpu_nr = cpu;
lc->spinlock_lockval = arch_spin_lockval(cpu);
#ifndef CONFIG_64BIT #ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE) { if (MACHINE_HAS_IEEE) {
lc->extended_save_area_addr = get_zeroed_page(GFP_KERNEL); lc->extended_save_area_addr = get_zeroed_page(GFP_KERNEL);
...@@ -226,6 +227,7 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu) ...@@ -226,6 +227,7 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu)
cpumask_set_cpu(cpu, mm_cpumask(&init_mm)); cpumask_set_cpu(cpu, mm_cpumask(&init_mm));
atomic_inc(&init_mm.context.attach_count); atomic_inc(&init_mm.context.attach_count);
lc->cpu_nr = cpu; lc->cpu_nr = cpu;
lc->spinlock_lockval = arch_spin_lockval(cpu);
lc->percpu_offset = __per_cpu_offset[cpu]; lc->percpu_offset = __per_cpu_offset[cpu];
lc->kernel_asce = S390_lowcore.kernel_asce; lc->kernel_asce = S390_lowcore.kernel_asce;
lc->machine_flags = S390_lowcore.machine_flags; lc->machine_flags = S390_lowcore.machine_flags;
...@@ -402,15 +404,6 @@ void smp_send_stop(void) ...@@ -402,15 +404,6 @@ void smp_send_stop(void)
} }
} }
/*
* Stop the current cpu.
*/
void smp_stop_cpu(void)
{
pcpu_sigp_retry(pcpu_devices + smp_processor_id(), SIGP_STOP, 0);
for (;;) ;
}
/* /*
* This is the main routine where commands issued by other * This is the main routine where commands issued by other
* cpus are handled. * cpus are handled.
...@@ -519,7 +512,7 @@ void smp_ctl_clear_bit(int cr, int bit) ...@@ -519,7 +512,7 @@ void smp_ctl_clear_bit(int cr, int bit)
} }
EXPORT_SYMBOL(smp_ctl_clear_bit); EXPORT_SYMBOL(smp_ctl_clear_bit);
#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_CRASH_DUMP) #ifdef CONFIG_CRASH_DUMP
static void __init smp_get_save_area(int cpu, u16 address) static void __init smp_get_save_area(int cpu, u16 address)
{ {
...@@ -534,14 +527,12 @@ static void __init smp_get_save_area(int cpu, u16 address) ...@@ -534,14 +527,12 @@ static void __init smp_get_save_area(int cpu, u16 address)
save_area = dump_save_area_create(cpu); save_area = dump_save_area_create(cpu);
if (!save_area) if (!save_area)
panic("could not allocate memory for save area\n"); panic("could not allocate memory for save area\n");
#ifdef CONFIG_CRASH_DUMP
if (address == boot_cpu_address) { if (address == boot_cpu_address) {
/* Copy the registers of the boot cpu. */ /* Copy the registers of the boot cpu. */
copy_oldmem_page(1, (void *) save_area, sizeof(*save_area), copy_oldmem_page(1, (void *) save_area, sizeof(*save_area),
SAVE_AREA_BASE - PAGE_SIZE, 0); SAVE_AREA_BASE - PAGE_SIZE, 0);
return; return;
} }
#endif
/* Get the registers of a non-boot cpu. */ /* Get the registers of a non-boot cpu. */
__pcpu_sigp_relax(address, SIGP_STOP_AND_STORE_STATUS, 0, NULL); __pcpu_sigp_relax(address, SIGP_STOP_AND_STORE_STATUS, 0, NULL);
memcpy_real(save_area, lc + SAVE_AREA_BASE, sizeof(*save_area)); memcpy_real(save_area, lc + SAVE_AREA_BASE, sizeof(*save_area));
...@@ -558,11 +549,11 @@ int smp_store_status(int cpu) ...@@ -558,11 +549,11 @@ int smp_store_status(int cpu)
return 0; return 0;
} }
#else /* CONFIG_ZFCPDUMP || CONFIG_CRASH_DUMP */ #else /* CONFIG_CRASH_DUMP */
static inline void smp_get_save_area(int cpu, u16 address) { } static inline void smp_get_save_area(int cpu, u16 address) { }
#endif /* CONFIG_ZFCPDUMP || CONFIG_CRASH_DUMP */ #endif /* CONFIG_CRASH_DUMP */
void smp_cpu_set_polarization(int cpu, int val) void smp_cpu_set_polarization(int cpu, int val)
{ {
...@@ -809,6 +800,7 @@ void __init smp_cpus_done(unsigned int max_cpus) ...@@ -809,6 +800,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
void __init smp_setup_processor_id(void) void __init smp_setup_processor_id(void)
{ {
S390_lowcore.cpu_nr = 0; S390_lowcore.cpu_nr = 0;
S390_lowcore.spinlock_lockval = arch_spin_lockval(0);
} }
/* /*
......
@@ -226,7 +226,7 @@ void update_vsyscall(struct timekeeper *tk)
	vdso_data->wtom_clock_sec =
	tk->xtime_sec + tk->wall_to_monotonic.tv_sec;
	vdso_data->wtom_clock_nsec = tk->xtime_nsec +
-	+ (tk->wall_to_monotonic.tv_nsec << tk->shift);
+	+ ((u64) tk->wall_to_monotonic.tv_nsec << tk->shift);
	nsecps = (u64) NSEC_PER_SEC << tk->shift;
	while (vdso_data->wtom_clock_nsec >= nsecps) {
	vdso_data->wtom_clock_nsec -= nsecps;
......
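The extra (u64) cast matters on 31-bit kernels, where tv_nsec is a 32-bit long: without it the shift is evaluated in 32 bits and the upper bits are lost before the 64-bit addition. A small illustration (the values are made up; the real shift comes from the clocksource):

/* Illustration only - shows the truncation the cast prevents on 31 bit. */
long nsec = 999999999;			/* tv_nsec can need ~30 bits */
int shift = 12;				/* assumed clocksource shift */
unsigned long long broken, ok;

broken = nsec << shift;			/* 32-bit shift first: high bits lost */
ok = (unsigned long long) nsec << shift;	/* widen first, then shift */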
...@@ -333,7 +333,9 @@ static void __init alloc_masks(struct sysinfo_15_1_x *info, ...@@ -333,7 +333,9 @@ static void __init alloc_masks(struct sysinfo_15_1_x *info,
nr_masks *= info->mag[TOPOLOGY_NR_MAG - offset - 1 - i]; nr_masks *= info->mag[TOPOLOGY_NR_MAG - offset - 1 - i];
nr_masks = max(nr_masks, 1); nr_masks = max(nr_masks, 1);
for (i = 0; i < nr_masks; i++) { for (i = 0; i < nr_masks; i++) {
mask->next = alloc_bootmem(sizeof(struct mask_info)); mask->next = alloc_bootmem_align(
roundup_pow_of_two(sizeof(struct mask_info)),
roundup_pow_of_two(sizeof(struct mask_info)));
mask = mask->next; mask = mask->next;
} }
} }
......
...@@ -907,7 +907,7 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu) ...@@ -907,7 +907,7 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu)
if (need_resched()) if (need_resched())
schedule(); schedule();
if (test_thread_flag(TIF_MCCK_PENDING)) if (test_cpu_flag(CIF_MCCK_PENDING))
s390_handle_mcck(); s390_handle_mcck();
if (!kvm_is_ucontrol(vcpu->kvm)) if (!kvm_is_ucontrol(vcpu->kvm))
......
...@@ -26,83 +26,81 @@ __setup("spin_retry=", spin_retry_setup); ...@@ -26,83 +26,81 @@ __setup("spin_retry=", spin_retry_setup);
void arch_spin_lock_wait(arch_spinlock_t *lp) void arch_spin_lock_wait(arch_spinlock_t *lp)
{ {
int count = spin_retry; unsigned int cpu = SPINLOCK_LOCKVAL;
unsigned int cpu = ~smp_processor_id();
unsigned int owner; unsigned int owner;
int count;
while (1) { while (1) {
owner = lp->owner_cpu; owner = ACCESS_ONCE(lp->lock);
if (!owner || smp_vcpu_scheduled(~owner)) { /* Try to get the lock if it is free. */
for (count = spin_retry; count > 0; count--) { if (!owner) {
if (arch_spin_is_locked(lp)) if (_raw_compare_and_swap(&lp->lock, 0, cpu))
continue; return;
if (_raw_compare_and_swap(&lp->owner_cpu, 0, continue;
cpu) == 0)
return;
}
if (MACHINE_IS_LPAR)
continue;
} }
owner = lp->owner_cpu; /* Check if the lock owner is running. */
if (owner) if (!smp_vcpu_scheduled(~owner)) {
smp_yield_cpu(~owner);
continue;
}
/* Loop for a while on the lock value. */
count = spin_retry;
do {
owner = ACCESS_ONCE(lp->lock);
} while (owner && count-- > 0);
if (!owner)
continue;
/*
* For multiple layers of hypervisors, e.g. z/VM + LPAR
* yield the CPU if the lock is still unavailable.
*/
if (!MACHINE_IS_LPAR)
smp_yield_cpu(~owner); smp_yield_cpu(~owner);
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0)
return;
} }
} }
EXPORT_SYMBOL(arch_spin_lock_wait); EXPORT_SYMBOL(arch_spin_lock_wait);
void arch_spin_lock_wait_flags(arch_spinlock_t *lp, unsigned long flags) void arch_spin_lock_wait_flags(arch_spinlock_t *lp, unsigned long flags)
{ {
int count = spin_retry; unsigned int cpu = SPINLOCK_LOCKVAL;
unsigned int cpu = ~smp_processor_id();
unsigned int owner; unsigned int owner;
int count;
local_irq_restore(flags); local_irq_restore(flags);
while (1) { while (1) {
owner = lp->owner_cpu; owner = ACCESS_ONCE(lp->lock);
if (!owner || smp_vcpu_scheduled(~owner)) { /* Try to get the lock if it is free. */
for (count = spin_retry; count > 0; count--) { if (!owner) {
if (arch_spin_is_locked(lp)) local_irq_disable();
continue; if (_raw_compare_and_swap(&lp->lock, 0, cpu))
local_irq_disable(); return;
if (_raw_compare_and_swap(&lp->owner_cpu, 0, local_irq_restore(flags);
cpu) == 0)
return;
local_irq_restore(flags);
}
if (MACHINE_IS_LPAR)
continue;
} }
owner = lp->owner_cpu; /* Check if the lock owner is running. */
if (owner) if (!smp_vcpu_scheduled(~owner)) {
smp_yield_cpu(~owner); smp_yield_cpu(~owner);
local_irq_disable();
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0)
return;
local_irq_restore(flags);
}
}
EXPORT_SYMBOL(arch_spin_lock_wait_flags);
int arch_spin_trylock_retry(arch_spinlock_t *lp)
{
unsigned int cpu = ~smp_processor_id();
int count;
for (count = spin_retry; count > 0; count--) {
if (arch_spin_is_locked(lp))
continue; continue;
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0) }
return 1; /* Loop for a while on the lock value. */
count = spin_retry;
do {
owner = ACCESS_ONCE(lp->lock);
} while (owner && count-- > 0);
if (!owner)
continue;
/*
* For multiple layers of hypervisors, e.g. z/VM + LPAR
* yield the CPU if the lock is still unavailable.
*/
if (!MACHINE_IS_LPAR)
smp_yield_cpu(~owner);
} }
return 0;
} }
EXPORT_SYMBOL(arch_spin_trylock_retry); EXPORT_SYMBOL(arch_spin_lock_wait_flags);
void arch_spin_relax(arch_spinlock_t *lock) void arch_spin_relax(arch_spinlock_t *lp)
{ {
unsigned int cpu = lock->owner_cpu; unsigned int cpu = lp->lock;
if (cpu != 0) { if (cpu != 0) {
if (MACHINE_IS_VM || MACHINE_IS_KVM || if (MACHINE_IS_VM || MACHINE_IS_KVM ||
!smp_vcpu_scheduled(~cpu)) !smp_vcpu_scheduled(~cpu))
...@@ -111,6 +109,17 @@ void arch_spin_relax(arch_spinlock_t *lock) ...@@ -111,6 +109,17 @@ void arch_spin_relax(arch_spinlock_t *lock)
} }
EXPORT_SYMBOL(arch_spin_relax); EXPORT_SYMBOL(arch_spin_relax);
int arch_spin_trylock_retry(arch_spinlock_t *lp)
{
int count;
for (count = spin_retry; count > 0; count--)
if (arch_spin_trylock_once(lp))
return 1;
return 0;
}
EXPORT_SYMBOL(arch_spin_trylock_retry);
void _raw_read_lock_wait(arch_rwlock_t *rw) void _raw_read_lock_wait(arch_rwlock_t *rw)
{ {
unsigned int old; unsigned int old;
...@@ -121,10 +130,10 @@ void _raw_read_lock_wait(arch_rwlock_t *rw) ...@@ -121,10 +130,10 @@ void _raw_read_lock_wait(arch_rwlock_t *rw)
smp_yield(); smp_yield();
count = spin_retry; count = spin_retry;
} }
if (!arch_read_can_lock(rw)) old = ACCESS_ONCE(rw->lock);
if ((int) old < 0)
continue; continue;
old = rw->lock & 0x7fffffffU; if (_raw_compare_and_swap(&rw->lock, old, old + 1))
if (_raw_compare_and_swap(&rw->lock, old, old + 1) == old)
return; return;
} }
} }
...@@ -141,12 +150,13 @@ void _raw_read_lock_wait_flags(arch_rwlock_t *rw, unsigned long flags) ...@@ -141,12 +150,13 @@ void _raw_read_lock_wait_flags(arch_rwlock_t *rw, unsigned long flags)
smp_yield(); smp_yield();
count = spin_retry; count = spin_retry;
} }
if (!arch_read_can_lock(rw)) old = ACCESS_ONCE(rw->lock);
if ((int) old < 0)
continue; continue;
old = rw->lock & 0x7fffffffU;
local_irq_disable(); local_irq_disable();
if (_raw_compare_and_swap(&rw->lock, old, old + 1) == old) if (_raw_compare_and_swap(&rw->lock, old, old + 1))
return; return;
local_irq_restore(flags);
} }
} }
EXPORT_SYMBOL(_raw_read_lock_wait_flags); EXPORT_SYMBOL(_raw_read_lock_wait_flags);
...@@ -157,10 +167,10 @@ int _raw_read_trylock_retry(arch_rwlock_t *rw) ...@@ -157,10 +167,10 @@ int _raw_read_trylock_retry(arch_rwlock_t *rw)
int count = spin_retry; int count = spin_retry;
while (count-- > 0) { while (count-- > 0) {
if (!arch_read_can_lock(rw)) old = ACCESS_ONCE(rw->lock);
if ((int) old < 0)
continue; continue;
old = rw->lock & 0x7fffffffU; if (_raw_compare_and_swap(&rw->lock, old, old + 1))
if (_raw_compare_and_swap(&rw->lock, old, old + 1) == old)
return 1; return 1;
} }
return 0; return 0;
...@@ -169,6 +179,7 @@ EXPORT_SYMBOL(_raw_read_trylock_retry); ...@@ -169,6 +179,7 @@ EXPORT_SYMBOL(_raw_read_trylock_retry);
void _raw_write_lock_wait(arch_rwlock_t *rw) void _raw_write_lock_wait(arch_rwlock_t *rw)
{ {
unsigned int old;
int count = spin_retry; int count = spin_retry;
while (1) { while (1) {
...@@ -176,9 +187,10 @@ void _raw_write_lock_wait(arch_rwlock_t *rw) ...@@ -176,9 +187,10 @@ void _raw_write_lock_wait(arch_rwlock_t *rw)
smp_yield(); smp_yield();
count = spin_retry; count = spin_retry;
} }
if (!arch_write_can_lock(rw)) old = ACCESS_ONCE(rw->lock);
if (old)
continue; continue;
if (_raw_compare_and_swap(&rw->lock, 0, 0x80000000) == 0) if (_raw_compare_and_swap(&rw->lock, 0, 0x80000000))
return; return;
} }
} }
...@@ -186,6 +198,7 @@ EXPORT_SYMBOL(_raw_write_lock_wait); ...@@ -186,6 +198,7 @@ EXPORT_SYMBOL(_raw_write_lock_wait);
void _raw_write_lock_wait_flags(arch_rwlock_t *rw, unsigned long flags) void _raw_write_lock_wait_flags(arch_rwlock_t *rw, unsigned long flags)
{ {
unsigned int old;
int count = spin_retry; int count = spin_retry;
local_irq_restore(flags); local_irq_restore(flags);
...@@ -194,23 +207,27 @@ void _raw_write_lock_wait_flags(arch_rwlock_t *rw, unsigned long flags) ...@@ -194,23 +207,27 @@ void _raw_write_lock_wait_flags(arch_rwlock_t *rw, unsigned long flags)
smp_yield(); smp_yield();
count = spin_retry; count = spin_retry;
} }
if (!arch_write_can_lock(rw)) old = ACCESS_ONCE(rw->lock);
if (old)
continue; continue;
local_irq_disable(); local_irq_disable();
if (_raw_compare_and_swap(&rw->lock, 0, 0x80000000) == 0) if (_raw_compare_and_swap(&rw->lock, 0, 0x80000000))
return; return;
local_irq_restore(flags);
} }
} }
EXPORT_SYMBOL(_raw_write_lock_wait_flags); EXPORT_SYMBOL(_raw_write_lock_wait_flags);
int _raw_write_trylock_retry(arch_rwlock_t *rw) int _raw_write_trylock_retry(arch_rwlock_t *rw)
{ {
unsigned int old;
int count = spin_retry; int count = spin_retry;
while (count-- > 0) { while (count-- > 0) {
if (!arch_write_can_lock(rw)) old = ACCESS_ONCE(rw->lock);
if (old)
continue; continue;
if (_raw_compare_and_swap(&rw->lock, 0, 0x80000000) == 0) if (_raw_compare_and_swap(&rw->lock, 0, 0x80000000))
return 1; return 1;
} }
return 0; return 0;
......
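The rewritten wait loops lean on two helpers outside this view: SPINLOCK_LOCKVAL, the per-cpu lock value that the smp.c changes above seed into the lowcore via arch_spin_lockval(cpu), and arch_spin_trylock_once(), now used by arch_spin_trylock_retry(). A sketch of what they plausibly look like, inferred from the call sites (the ~owner arguments to smp_vcpu_scheduled() suggest the lock value is the complemented cpu number, so 0 still means unlocked):

/* Assumed definitions (sketch); the real ones live in asm/spinlock.h. */
#define SPINLOCK_LOCKVAL	(S390_lowcore.spinlock_lockval)

static inline unsigned int arch_spin_lockval(int cpu)
{
	return ~cpu;			/* never 0, so 0 == unlocked */
}

static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
	return lock.lock == 0;
}

static inline int arch_spin_trylock_once(arch_spinlock_t *lp)
{
	barrier();
	return likely(arch_spin_value_unlocked(*lp) &&
		      _raw_compare_and_swap(&lp->lock, 0, SPINLOCK_LOCKVAL));
}

Note also that _raw_compare_and_swap() is now treated as returning success or failure directly, instead of being compared against the old lock value as before.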
...@@ -76,7 +76,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr, ...@@ -76,7 +76,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
{ {
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
update_primary_asce(current); load_kernel_asce();
tmp1 = -256UL; tmp1 = -256UL;
asm volatile( asm volatile(
" sacf 0\n" " sacf 0\n"
...@@ -159,7 +159,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x, ...@@ -159,7 +159,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
{ {
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
update_primary_asce(current); load_kernel_asce();
tmp1 = -256UL; tmp1 = -256UL;
asm volatile( asm volatile(
" sacf 0\n" " sacf 0\n"
...@@ -225,7 +225,7 @@ static inline unsigned long copy_in_user_mvc(void __user *to, const void __user ...@@ -225,7 +225,7 @@ static inline unsigned long copy_in_user_mvc(void __user *to, const void __user
{ {
unsigned long tmp1; unsigned long tmp1;
update_primary_asce(current); load_kernel_asce();
asm volatile( asm volatile(
" sacf 256\n" " sacf 256\n"
" "AHI" %0,-1\n" " "AHI" %0,-1\n"
...@@ -292,7 +292,7 @@ static inline unsigned long clear_user_xc(void __user *to, unsigned long size) ...@@ -292,7 +292,7 @@ static inline unsigned long clear_user_xc(void __user *to, unsigned long size)
{ {
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
update_primary_asce(current); load_kernel_asce();
asm volatile( asm volatile(
" sacf 256\n" " sacf 256\n"
" "AHI" %0,-1\n" " "AHI" %0,-1\n"
...@@ -358,7 +358,7 @@ unsigned long __strnlen_user(const char __user *src, unsigned long size) ...@@ -358,7 +358,7 @@ unsigned long __strnlen_user(const char __user *src, unsigned long size)
{ {
if (unlikely(!size)) if (unlikely(!size))
return 0; return 0;
update_primary_asce(current); load_kernel_asce();
return strnlen_user_srst(src, size); return strnlen_user_srst(src, size);
} }
EXPORT_SYMBOL(__strnlen_user); EXPORT_SYMBOL(__strnlen_user);
......
...@@ -415,7 +415,7 @@ static inline int do_exception(struct pt_regs *regs, int access) ...@@ -415,7 +415,7 @@ static inline int do_exception(struct pt_regs *regs, int access)
* The instruction that caused the program check has * The instruction that caused the program check has
* been nullified. Don't signal single step via SIGTRAP. * been nullified. Don't signal single step via SIGTRAP.
*/ */
clear_tsk_thread_flag(tsk, TIF_PER_TRAP); clear_pt_regs_flag(regs, PIF_PER_TRAP);
if (notify_page_fault(regs)) if (notify_page_fault(regs))
return 0; return 0;
......
...@@ -6,130 +6,60 @@ ...@@ -6,130 +6,60 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/memblock.h>
#include <linux/init.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <asm/ipl.h> #include <asm/ipl.h>
#include <asm/sclp.h> #include <asm/sclp.h>
#include <asm/setup.h> #include <asm/setup.h>
#define ADDR2G (1ULL << 31) #define ADDR2G (1ULL << 31)
static void find_memory_chunks(struct mem_chunk chunk[], unsigned long maxsize) #define CHUNK_READ_WRITE 0
#define CHUNK_READ_ONLY 1
static inline void memblock_physmem_add(phys_addr_t start, phys_addr_t size)
{
memblock_add_range(&memblock.memory, start, size, 0, 0);
memblock_add_range(&memblock.physmem, start, size, 0, 0);
}
void __init detect_memory_memblock(void)
{ {
unsigned long long memsize, rnmax, rzm; unsigned long long memsize, rnmax, rzm;
unsigned long addr = 0, size; unsigned long addr, size;
int i = 0, type; int type;
rzm = sclp_get_rzm(); rzm = sclp_get_rzm();
rnmax = sclp_get_rnmax(); rnmax = sclp_get_rnmax();
memsize = rzm * rnmax; memsize = rzm * rnmax;
if (!rzm) if (!rzm)
rzm = 1ULL << 17; rzm = 1ULL << 17;
if (sizeof(long) == 4) { if (IS_ENABLED(CONFIG_32BIT)) {
rzm = min(ADDR2G, rzm); rzm = min(ADDR2G, rzm);
memsize = memsize ? min(ADDR2G, memsize) : ADDR2G; memsize = min(ADDR2G, memsize);
} }
if (maxsize) max_physmem_end = memsize;
memsize = memsize ? min((unsigned long)memsize, maxsize) : maxsize; addr = 0;
/* keep memblock lists close to the kernel */
memblock_set_bottom_up(true);
do { do {
size = 0; size = 0;
type = tprot(addr); type = tprot(addr);
do { do {
size += rzm; size += rzm;
if (memsize && addr + size >= memsize) if (max_physmem_end && addr + size >= max_physmem_end)
break; break;
} while (type == tprot(addr + size)); } while (type == tprot(addr + size));
if (type == CHUNK_READ_WRITE || type == CHUNK_READ_ONLY) { if (type == CHUNK_READ_WRITE || type == CHUNK_READ_ONLY) {
if (memsize && (addr + size > memsize)) if (max_physmem_end && (addr + size > max_physmem_end))
size = memsize - addr; size = max_physmem_end - addr;
chunk[i].addr = addr; memblock_physmem_add(addr, size);
chunk[i].size = size;
chunk[i].type = type;
i++;
} }
addr += size; addr += size;
} while (addr < memsize && i < MEMORY_CHUNKS); } while (addr < max_physmem_end);
} memblock_set_bottom_up(false);
if (!max_physmem_end)
/** max_physmem_end = memblock_end_of_DRAM();
* detect_memory_layout - fill mem_chunk array with memory layout data
* @chunk: mem_chunk array to be filled
* @maxsize: maximum address where memory detection should stop
*
* Fills the passed in memory chunk array with the memory layout of the
* machine. The array must have a size of at least MEMORY_CHUNKS and will
* be fully initialized afterwards.
* If the maxsize paramater has a value > 0 memory detection will stop at
* that address. It is guaranteed that all chunks have an ending address
* that is smaller than maxsize.
* If maxsize is 0 all memory will be detected.
*/
void detect_memory_layout(struct mem_chunk chunk[], unsigned long maxsize)
{
unsigned long flags, flags_dat, cr0;
memset(chunk, 0, MEMORY_CHUNKS * sizeof(struct mem_chunk));
/*
* Disable IRQs, DAT and low address protection so tprot does the
* right thing and we don't get scheduled away with low address
* protection disabled.
*/
local_irq_save(flags);
flags_dat = __arch_local_irq_stnsm(0xfb);
/*
* In case DAT was enabled, make sure chunk doesn't reside in vmalloc
* space. We have disabled DAT and any access to vmalloc area will
* cause an exception.
* If DAT was disabled we are called from early ipl code.
*/
if (test_bit(5, &flags_dat)) {
if (WARN_ON_ONCE(is_vmalloc_or_module_addr(chunk)))
goto out;
}
__ctl_store(cr0, 0, 0);
__ctl_clear_bit(0, 28);
find_memory_chunks(chunk, maxsize);
__ctl_load(cr0, 0, 0);
out:
__arch_local_irq_ssm(flags_dat);
local_irq_restore(flags);
}
EXPORT_SYMBOL(detect_memory_layout);
/*
* Create memory hole with given address and size.
*/
void create_mem_hole(struct mem_chunk mem_chunk[], unsigned long addr,
unsigned long size)
{
int i;
for (i = 0; i < MEMORY_CHUNKS; i++) {
struct mem_chunk *chunk = &mem_chunk[i];
if (chunk->size == 0)
continue;
if (addr > chunk->addr + chunk->size)
continue;
if (addr + size <= chunk->addr)
continue;
/* Split */
if ((addr > chunk->addr) &&
(addr + size < chunk->addr + chunk->size)) {
struct mem_chunk *new = chunk + 1;
memmove(new, chunk, (MEMORY_CHUNKS-i-1) * sizeof(*new));
new->addr = addr + size;
new->size = chunk->addr + chunk->size - new->addr;
chunk->size = addr - chunk->addr;
continue;
} else if ((addr <= chunk->addr) &&
(addr + size >= chunk->addr + chunk->size)) {
memmove(chunk, chunk + 1, (MEMORY_CHUNKS-i-1) * sizeof(*chunk));
memset(&mem_chunk[MEMORY_CHUNKS-1], 0, sizeof(*chunk));
} else if (addr + size < chunk->addr + chunk->size) {
chunk->size = chunk->addr + chunk->size - addr - size;
chunk->addr = addr + size;
} else if (addr > chunk->addr) {
chunk->size = addr - chunk->addr;
}
}
} }
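With the mem_chunk array gone, detected memory ends up on two memblock lists: memblock.memory, which later setup code may trim, and the new memblock.physmem, which keeps everything the tprot scan reported. A hedged usage sketch, assuming the generic for_each_memblock() iterator also accepts the physmem list:

#include <linux/memblock.h>

/* Illustrative consumer only - walks the untrimmed physical memory list. */
static void __init print_detected_memory(void)
{
	struct memblock_region *reg;

	for_each_memblock(physmem, reg)
		pr_info("physmem: %#018llx-%#018llx\n",
			(unsigned long long) reg->base,
			(unsigned long long) (reg->base + reg->size - 1));
}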
...@@ -12,8 +12,6 @@ ...@@ -12,8 +12,6 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/gfp.h> #include <linux/gfp.h>
#include <linux/init.h> #include <linux/init.h>
#include <asm/setup.h>
#include <asm/ipl.h>
#define ESSA_SET_STABLE 1 #define ESSA_SET_STABLE 1
#define ESSA_SET_UNUSED 2 #define ESSA_SET_UNUSED 2
...@@ -43,14 +41,6 @@ void __init cmma_init(void) ...@@ -43,14 +41,6 @@ void __init cmma_init(void)
if (!cmma_flag) if (!cmma_flag)
return; return;
/*
* Disable CMM for dump, otherwise the tprot based memory
* detection can fail because of unstable pages.
*/
if (OLDMEM_BASE || ipl_info.type == IPL_TYPE_FCP_DUMP) {
cmma_flag = 0;
return;
}
asm volatile( asm volatile(
" .insn rrf,0xb9ab0000,%1,%1,0,0\n" " .insn rrf,0xb9ab0000,%1,%1,0,0\n"
"0: la %0,0\n" "0: la %0,0\n"
......
...@@ -53,8 +53,10 @@ static void __crst_table_upgrade(void *arg) ...@@ -53,8 +53,10 @@ static void __crst_table_upgrade(void *arg)
{ {
struct mm_struct *mm = arg; struct mm_struct *mm = arg;
if (current->active_mm == mm) if (current->active_mm == mm) {
update_user_asce(mm, 1); clear_user_asce();
set_user_asce(mm);
}
__tlb_flush_local(); __tlb_flush_local();
} }
...@@ -108,7 +110,7 @@ void crst_table_downgrade(struct mm_struct *mm, unsigned long limit) ...@@ -108,7 +110,7 @@ void crst_table_downgrade(struct mm_struct *mm, unsigned long limit)
pgd_t *pgd; pgd_t *pgd;
if (current->active_mm == mm) { if (current->active_mm == mm) {
clear_user_asce(mm, 1); clear_user_asce();
__tlb_flush_mm(mm); __tlb_flush_mm(mm);
} }
while (mm->context.asce_limit > limit) { while (mm->context.asce_limit > limit) {
...@@ -134,7 +136,7 @@ void crst_table_downgrade(struct mm_struct *mm, unsigned long limit) ...@@ -134,7 +136,7 @@ void crst_table_downgrade(struct mm_struct *mm, unsigned long limit)
crst_table_free(mm, (unsigned long *) pgd); crst_table_free(mm, (unsigned long *) pgd);
} }
if (current->active_mm == mm) if (current->active_mm == mm)
update_user_asce(mm, 1); set_user_asce(mm);
} }
#endif #endif
......
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
#include <linux/list.h> #include <linux/list.h>
#include <linux/hugetlb.h> #include <linux/hugetlb.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/memblock.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/setup.h> #include <asm/setup.h>
...@@ -66,7 +67,8 @@ static pte_t __ref *vmem_pte_alloc(unsigned long address) ...@@ -66,7 +67,8 @@ static pte_t __ref *vmem_pte_alloc(unsigned long address)
if (slab_is_available()) if (slab_is_available())
pte = (pte_t *) page_table_alloc(&init_mm, address); pte = (pte_t *) page_table_alloc(&init_mm, address);
else else
pte = alloc_bootmem(PTRS_PER_PTE * sizeof(pte_t)); pte = alloc_bootmem_align(PTRS_PER_PTE * sizeof(pte_t),
PTRS_PER_PTE * sizeof(pte_t));
if (!pte) if (!pte)
return NULL; return NULL;
clear_table((unsigned long *) pte, _PAGE_INVALID, clear_table((unsigned long *) pte, _PAGE_INVALID,
...@@ -371,16 +373,14 @@ int vmem_add_mapping(unsigned long start, unsigned long size) ...@@ -371,16 +373,14 @@ int vmem_add_mapping(unsigned long start, unsigned long size)
void __init vmem_map_init(void) void __init vmem_map_init(void)
{ {
unsigned long ro_start, ro_end; unsigned long ro_start, ro_end;
unsigned long start, end; struct memblock_region *reg;
int i; phys_addr_t start, end;
ro_start = PFN_ALIGN((unsigned long)&_stext); ro_start = PFN_ALIGN((unsigned long)&_stext);
ro_end = (unsigned long)&_eshared & PAGE_MASK; ro_end = (unsigned long)&_eshared & PAGE_MASK;
for (i = 0; i < MEMORY_CHUNKS; i++) { for_each_memblock(memory, reg) {
if (!memory_chunk[i].size) start = reg->base;
continue; end = reg->base + reg->size - 1;
start = memory_chunk[i].addr;
end = memory_chunk[i].addr + memory_chunk[i].size;
if (start >= ro_end || end <= ro_start) if (start >= ro_end || end <= ro_start)
vmem_add_mem(start, end - start, 0); vmem_add_mem(start, end - start, 0);
else if (start >= ro_start && end <= ro_end) else if (start >= ro_start && end <= ro_end)
@@ -400,23 +400,21 @@ void __init vmem_map_init(void)
 }
 
 /*
- * Convert memory chunk array to a memory segment list so there is a single
- * list that contains both r/w memory and shared memory segments.
+ * Convert memblock.memory to a memory segment list so there is a single
+ * list that contains all memory segments.
  */
 static int __init vmem_convert_memory_chunk(void)
 {
+	struct memblock_region *reg;
 	struct memory_segment *seg;
-	int i;
 
 	mutex_lock(&vmem_mutex);
-	for (i = 0; i < MEMORY_CHUNKS; i++) {
-		if (!memory_chunk[i].size)
-			continue;
+	for_each_memblock(memory, reg) {
 		seg = kzalloc(sizeof(*seg), GFP_KERNEL);
 		if (!seg)
 			panic("Out of memory...\n");
-		seg->start = memory_chunk[i].addr;
-		seg->size = memory_chunk[i].size;
+		seg->start = reg->base;
+		seg->size = reg->size;
 		insert_memory_segment(seg);
 	}
 	mutex_unlock(&vmem_mutex);
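
Both vmem.c loops above follow the same conversion pattern: the fixed-size memory_chunk[] array is gone and the walk happens directly over memblock.memory. A minimal sketch of the iteration idiom on its own (the function name is illustrative, not taken from the patch):

#include <linux/memblock.h>

/* Sum the size of all memory regions currently known to memblock. */
static unsigned long __init demo_count_memory(void)
{
	struct memblock_region *reg;
	unsigned long bytes = 0;

	for_each_memblock(memory, reg)
		bytes += reg->size;
	return bytes;
}

The empty-chunk checks and the explicit index disappear because memblock only keeps populated regions.
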
......
@@ -209,13 +209,11 @@ static void init_all_cpu_buffers(void)
 	}
 }
 
-static int prepare_cpu_buffers(void)
+static void prepare_cpu_buffers(void)
 {
-	int cpu;
-	int rc;
 	struct hws_cpu_buffer *cb;
+	int cpu;
 
-	rc = 0;
 	for_each_online_cpu(cpu) {
 		cb = &per_cpu(sampler_cpu_buffer, cpu);
 		atomic_set(&cb->ext_params, 0);
@@ -230,8 +228,6 @@ static int prepare_cpu_buffers(void)
 		cb->oom = 0;
 		cb->stop_mode = 0;
 	}
-
-	return rc;
 }
 
 /*
@@ -1107,9 +1103,7 @@ int hwsampler_start_all(unsigned long rate)
 	if (rc)
 		goto start_all_exit;
 
-	rc = prepare_cpu_buffers();
-	if (rc)
-		goto start_all_exit;
+	prepare_cpu_buffers();
 
 	for_each_online_cpu(cpu) {
 		rc = start_sampling(cpu);
@@ -1156,7 +1150,7 @@ int hwsampler_stop_all(void)
 	rc = 0;
 	if (hws_state == HWS_INIT) {
 		mutex_unlock(&hws_sem);
-		return rc;
+		return 0;
 	}
 	hws_state = HWS_STOPPING;
 	mutex_unlock(&hws_sem);
......
@@ -114,6 +114,16 @@ static int clp_store_query_pci_fn(struct zpci_dev *zdev,
 	zdev->end_dma = response->edma;
 	zdev->pchid = response->pchid;
 	zdev->pfgid = response->pfgid;
+	zdev->pft = response->pft;
+	zdev->vfn = response->vfn;
+	zdev->uid = response->uid;
+
+	memcpy(zdev->pfip, response->pfip, sizeof(zdev->pfip));
+	if (response->util_str_avail) {
+		memcpy(zdev->util_str, response->util_str,
+		       sizeof(zdev->util_str));
+	}
+
 	return 0;
 }
......
@@ -76,7 +76,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
 	switch (ccdf->pec) {
 	case 0x0301: /* Standby -> Configured */
-		if (!zdev || zdev->state == ZPCI_FN_STATE_CONFIGURED)
+		if (!zdev || zdev->state != ZPCI_FN_STATE_STANDBY)
 			break;
 		zdev->state = ZPCI_FN_STATE_CONFIGURED;
 		zdev->fh = ccdf->fh;
@@ -86,7 +86,8 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
 		pci_rescan_bus(zdev->bus);
 		break;
 	case 0x0302: /* Reserved -> Standby */
-		clp_add_pci_device(ccdf->fid, ccdf->fh, 0);
+		if (!zdev)
+			clp_add_pci_device(ccdf->fid, ccdf->fh, 0);
 		break;
 	case 0x0303: /* Deconfiguration requested */
 		if (pdev)
......
@@ -12,43 +12,29 @@
 #include <linux/stat.h>
 #include <linux/pci.h>
 
-static ssize_t show_fid(struct device *dev, struct device_attribute *attr,
-			char *buf)
-{
-	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
-
-	return sprintf(buf, "0x%08x\n", zdev->fid);
-}
-static DEVICE_ATTR(function_id, S_IRUGO, show_fid, NULL);
-
-static ssize_t show_fh(struct device *dev, struct device_attribute *attr,
-		       char *buf)
-{
-	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
-
-	return sprintf(buf, "0x%08x\n", zdev->fh);
-}
-static DEVICE_ATTR(function_handle, S_IRUGO, show_fh, NULL);
-
-static ssize_t show_pchid(struct device *dev, struct device_attribute *attr,
-			  char *buf)
-{
-	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
-
-	return sprintf(buf, "0x%04x\n", zdev->pchid);
-}
-static DEVICE_ATTR(pchid, S_IRUGO, show_pchid, NULL);
-
-static ssize_t show_pfgid(struct device *dev, struct device_attribute *attr,
-			  char *buf)
-{
-	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
-
-	return sprintf(buf, "0x%02x\n", zdev->pfgid);
-}
-static DEVICE_ATTR(pfgid, S_IRUGO, show_pfgid, NULL);
-
-static ssize_t store_recover(struct device *dev, struct device_attribute *attr,
+#define zpci_attr(name, fmt, member)					\
+static ssize_t name##_show(struct device *dev,				\
+			   struct device_attribute *attr, char *buf)	\
+{									\
+	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));		\
+									\
+	return sprintf(buf, fmt, zdev->member);				\
+}									\
+static DEVICE_ATTR_RO(name)
+
+zpci_attr(function_id, "0x%08x\n", fid);
+zpci_attr(function_handle, "0x%08x\n", fh);
+zpci_attr(pchid, "0x%04x\n", pchid);
+zpci_attr(pfgid, "0x%02x\n", pfgid);
+zpci_attr(vfn, "0x%04x\n", vfn);
+zpci_attr(pft, "0x%02x\n", pft);
+zpci_attr(uid, "0x%x\n", uid);
+zpci_attr(segment0, "0x%02x\n", pfip[0]);
+zpci_attr(segment1, "0x%02x\n", pfip[1]);
+zpci_attr(segment2, "0x%02x\n", pfip[2]);
+zpci_attr(segment3, "0x%02x\n", pfip[3]);
+
+static ssize_t recover_store(struct device *dev, struct device_attribute *attr,
 			     const char *buf, size_t count)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
@@ -70,20 +56,55 @@ static ssize_t store_recover(struct device *dev, struct device_attribute *attr,
 	pci_rescan_bus(zdev->bus);
 	return count;
 }
-static DEVICE_ATTR(recover, S_IWUSR, NULL, store_recover);
+static DEVICE_ATTR_WO(recover);
+
+static ssize_t util_string_read(struct file *filp, struct kobject *kobj,
+				struct bin_attribute *attr, char *buf,
+				loff_t off, size_t count)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct zpci_dev *zdev = get_zdev(pdev);
+
+	return memory_read_from_buffer(buf, count, &off, zdev->util_str,
+				       sizeof(zdev->util_str));
+}
+static BIN_ATTR_RO(util_string, CLP_UTIL_STR_LEN);
+
+static struct bin_attribute *zpci_bin_attrs[] = {
+	&bin_attr_util_string,
+	NULL,
+};
 
 static struct attribute *zpci_dev_attrs[] = {
 	&dev_attr_function_id.attr,
 	&dev_attr_function_handle.attr,
 	&dev_attr_pchid.attr,
 	&dev_attr_pfgid.attr,
+	&dev_attr_pft.attr,
+	&dev_attr_vfn.attr,
+	&dev_attr_uid.attr,
 	&dev_attr_recover.attr,
 	NULL,
 };
 static struct attribute_group zpci_attr_group = {
 	.attrs = zpci_dev_attrs,
+	.bin_attrs = zpci_bin_attrs,
 };
+
+static struct attribute *pfip_attrs[] = {
+	&dev_attr_segment0.attr,
+	&dev_attr_segment1.attr,
+	&dev_attr_segment2.attr,
+	&dev_attr_segment3.attr,
+	NULL,
+};
+static struct attribute_group pfip_attr_group = {
+	.name = "pfip",
+	.attrs = pfip_attrs,
+};
+
 const struct attribute_group *zpci_attr_groups[] = {
 	&zpci_attr_group,
+	&pfip_attr_group,
 	NULL,
 };
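
The zpci_attr() macro leans on the DEVICE_ATTR_RO() convention: the name argument must have a matching name_show() function, and DEVICE_ATTR_RO(name) then emits a read-only (0444) struct device_attribute called dev_attr_name; DEVICE_ATTR_WO() does the same for a name_store() function. Roughly what a single zpci_attr(pchid, "0x%04x\n", pchid) invocation expands to, spelled out here only for illustration:

static ssize_t pchid_show(struct device *dev,
			  struct device_attribute *attr, char *buf)
{
	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));

	return sprintf(buf, "0x%04x\n", zdev->pchid);
}
static DEVICE_ATTR_RO(pchid);

Because zpci_attr_groups now also carries the pfip group and the util_string binary attribute, wiring the array up through the PCI device's dev.groups pointer (done elsewhere in this series) lets the driver core create and remove all of these attributes at device registration time, without explicit sysfs_create_file() calls.
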
@@ -33,4 +33,4 @@ obj-$(CONFIG_MONWRITER) += monwriter.o
 obj-$(CONFIG_S390_VMUR) += vmur.o
 
 zcore_mod-objs := sclp_sdias.o zcore.o
-obj-$(CONFIG_ZFCPDUMP) += zcore_mod.o
+obj-$(CONFIG_CRASH_DUMP) += zcore_mod.o
@@ -17,6 +17,8 @@
 #include <linux/miscdevice.h>
 #include <linux/debugfs.h>
 #include <linux/module.h>
+#include <linux/memblock.h>
+
 #include <asm/asm-offsets.h>
 #include <asm/ipl.h>
 #include <asm/sclp.h>
@@ -411,33 +413,24 @@ static ssize_t zcore_memmap_read(struct file *filp, char __user *buf,
 				 size_t count, loff_t *ppos)
 {
 	return simple_read_from_buffer(buf, count, ppos, filp->private_data,
-				       MEMORY_CHUNKS * CHUNK_INFO_SIZE);
+				       memblock.memory.cnt * CHUNK_INFO_SIZE);
 }
 
 static int zcore_memmap_open(struct inode *inode, struct file *filp)
 {
-	int i;
+	struct memblock_region *reg;
 	char *buf;
-	struct mem_chunk *chunk_array;
+	int i = 0;
 
-	chunk_array = kzalloc(MEMORY_CHUNKS * sizeof(struct mem_chunk),
-			      GFP_KERNEL);
-	if (!chunk_array)
-		return -ENOMEM;
-	detect_memory_layout(chunk_array, 0);
-	buf = kzalloc(MEMORY_CHUNKS * CHUNK_INFO_SIZE, GFP_KERNEL);
+	buf = kzalloc(memblock.memory.cnt * CHUNK_INFO_SIZE, GFP_KERNEL);
 	if (!buf) {
-		kfree(chunk_array);
 		return -ENOMEM;
 	}
-	for (i = 0; i < MEMORY_CHUNKS; i++) {
-		sprintf(buf + (i * CHUNK_INFO_SIZE), "%016llx %016llx ",
-			(unsigned long long) chunk_array[i].addr,
-			(unsigned long long) chunk_array[i].size);
-		if (chunk_array[i].size == 0)
-			break;
+	for_each_memblock(memory, reg) {
+		sprintf(buf + (i++ * CHUNK_INFO_SIZE), "%016llx %016llx ",
+			(unsigned long long) reg->base,
+			(unsigned long long) reg->size);
 	}
-	kfree(chunk_array);
 	filp->private_data = buf;
 	return nonseekable_open(inode, filp);
 }
@@ -593,21 +586,12 @@ static int __init check_sdias(void)
 
 static int __init get_mem_info(unsigned long *mem, unsigned long *end)
 {
-	int i;
-	struct mem_chunk *chunk_array;
+	struct memblock_region *reg;
 
-	chunk_array = kzalloc(MEMORY_CHUNKS * sizeof(struct mem_chunk),
-			      GFP_KERNEL);
-	if (!chunk_array)
-		return -ENOMEM;
-	detect_memory_layout(chunk_array, 0);
-	for (i = 0; i < MEMORY_CHUNKS; i++) {
-		if (chunk_array[i].size == 0)
-			break;
-		*mem += chunk_array[i].size;
-		*end = max(*end, chunk_array[i].addr + chunk_array[i].size);
+	for_each_memblock(memory, reg) {
+		*mem += reg->size;
+		*end = max_t(unsigned long, *end, reg->base + reg->size);
 	}
-	kfree(chunk_array);
 	return 0;
 }
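
zcore's memmap debugfs file keeps its record format, one fixed-width "<base> <size> " hex pair per region; only the source of the regions changes. A hypothetical user-space reader, assuming the usual debugfs mount point and a zcore/memmap path (both path and mount point are assumptions here, not taken from the patch):

#include <stdio.h>

int main(void)
{
	unsigned long long base, size;
	FILE *f = fopen("/sys/kernel/debug/zcore/memmap", "r");

	if (!f)
		return 1;
	/* fscanf skips the padding between fixed-width records. */
	while (fscanf(f, "%llx %llx", &base, &size) == 2)
		printf("region: base=0x%016llx size=0x%016llx\n", base, size);
	fclose(f);
	return 0;
}
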
......
@@ -46,7 +46,7 @@ static u16 ccwreq_next_path(struct ccw_device *cdev)
 		goto out;
 	}
 	req->retries = req->maxretries;
-	req->mask = lpm_adjust(req->mask >>= 1, req->lpm);
+	req->mask = lpm_adjust(req->mask >> 1, req->lpm);
 out:
 	return req->mask;
 }
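
The ccwreq_next_path() change removes a compound assignment that was buried in the argument list. It appears to be a readability cleanup rather than a functional fix, since the surrounding req->mask = ... assignment overwrites the value anyway. A small stand-alone C demonstration of why the two forms behave the same (lpm_adjust_demo is a stand-in with simplified semantics, not the cio helper):

#include <stdio.h>

/* Stand-in for lpm_adjust(): keep only the paths allowed by lpm. */
static unsigned int lpm_adjust_demo(unsigned int mask, unsigned int lpm)
{
	return mask & lpm;
}

int main(void)
{
	unsigned int mask = 0x80, lpm = 0xff;

	/* Old style: side effect hidden inside the argument list. */
	mask = lpm_adjust_demo(mask >>= 1, lpm);
	printf("old: 0x%02x\n", mask);	/* prints 0x40 */

	mask = 0x80;
	/* New style: same result, no hidden modification of mask. */
	mask = lpm_adjust_demo(mask >> 1, lpm);
	printf("new: 0x%02x\n", mask);	/* prints 0x40 */
	return 0;
}
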
@@ -252,7 +252,7 @@ static void ccwreq_log_status(struct ccw_device *cdev, enum io_status status)
  */
 void ccw_request_handler(struct ccw_device *cdev)
 {
-	struct irb *irb = (struct irb *)&S390_lowcore.irb;
+	struct irb *irb = &__get_cpu_var(cio_irb);
 	struct ccw_request *req = &cdev->private->req;
 	enum io_status status;
 	int rc = -EOPNOTSUPP;
......
@@ -509,7 +509,7 @@ int chp_new(struct chp_id chpid)
  * On success return a newly allocated copy of the channel-path description
  * data associated with the given channel-path ID. Return %NULL on error.
  */
-void *chp_get_chp_desc(struct chp_id chpid)
+struct channel_path_desc *chp_get_chp_desc(struct chp_id chpid)
 {
 	struct channel_path *chp;
 	struct channel_path_desc *desc;
......
@@ -60,7 +60,7 @@ static inline struct channel_path *chpid_to_chp(struct chp_id chpid)
 int chp_get_status(struct chp_id chpid);
 u8 chp_get_sch_opm(struct subchannel *sch);
 int chp_is_registered(struct chp_id chpid);
-void *chp_get_chp_desc(struct chp_id chpid);
+struct channel_path_desc *chp_get_chp_desc(struct chp_id chpid);
 void chp_remove_cmg_attr(struct channel_path *chp);
 int chp_add_cmg_attr(struct channel_path *chp);
 int chp_update_desc(struct channel_path *chp);
......
@@ -21,17 +21,6 @@ struct cmg_entry {
 	u32 values[NR_MEASUREMENT_ENTRIES];
 } __attribute__ ((packed));
 
-struct channel_path_desc {
-	u8 flags;
-	u8 lsn;
-	u8 desc;
-	u8 chpid;
-	u8 swla;
-	u8 zeroes;
-	u8 chla;
-	u8 chpp;
-} __attribute__ ((packed));
-
 struct channel_path_desc_fmt1 {
 	u8 flags;
 	u8 lsn;
......
@@ -58,7 +58,7 @@ static void chsc_subchannel_irq(struct subchannel *sch)
 {
 	struct chsc_private *private = dev_get_drvdata(&sch->dev);
 	struct chsc_request *request = private->request;
-	struct irb *irb = (struct irb *)&S390_lowcore.irb;
+	struct irb *irb = &__get_cpu_var(cio_irb);
 
 	CHSC_LOG(4, "irb");
 	CHSC_LOG_HEX(4, irb, sizeof(*irb));
......
@@ -46,6 +46,9 @@ debug_info_t *cio_debug_msg_id;
 debug_info_t *cio_debug_trace_id;
 debug_info_t *cio_debug_crw_id;
 
+DEFINE_PER_CPU_ALIGNED(struct irb, cio_irb);
+EXPORT_PER_CPU_SYMBOL(cio_irb);
+
 /*
  * Function: cio_debug_init
  * Initializes three debug logs for common I/O:
@@ -560,7 +563,7 @@ static irqreturn_t do_cio_interrupt(int irq, void *dummy)
 	__this_cpu_write(s390_idle.nohz_delay, 1);
 	tpi_info = (struct tpi_info *) &get_irq_regs()->int_code;
-	irb = (struct irb *) &S390_lowcore.irb;
+	irb = &__get_cpu_var(cio_irb);
 	sch = (struct subchannel *)(unsigned long) tpi_info->intparm;
 	if (!sch) {
 		/* Clear pending interrupt condition. */
@@ -609,7 +612,7 @@ void cio_tsch(struct subchannel *sch)
 	struct irb *irb;
 	int irq_context;
 
-	irb = (struct irb *)&S390_lowcore.irb;
+	irb = &__get_cpu_var(cio_irb);
 	/* Store interrupt response block to lowcore. */
 	if (tsch(sch->schid, irb) != 0)
 		/* Not status pending or not operational. */
@@ -746,7 +749,7 @@ __clear_io_subchannel_easy(struct subchannel_id schid)
 	struct tpi_info ti;
 
 	if (tpi(&ti)) {
-		tsch(ti.schid, (struct irb *)&S390_lowcore.irb);
+		tsch(ti.schid, &__get_cpu_var(cio_irb));
 		if (schid_equal(&ti.schid, &schid))
 			return 0;
 	}
......
@@ -102,6 +102,8 @@ struct subchannel {
 	struct schib_config config;
 } __attribute__ ((aligned(8)));
 
+DECLARE_PER_CPU(struct irb, cio_irb);
+
 #define to_subchannel(n) container_of(n, struct subchannel, dev)
 
 extern int cio_validate_subchannel (struct subchannel *, struct subchannel_id);
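
The cio changes replace the single IRB that used to live in the lowcore with a per-CPU variable: cio.c provides the storage with DEFINE_PER_CPU_ALIGNED plus EXPORT_PER_CPU_SYMBOL so modules can reach it, cio.h publishes the DECLARE_PER_CPU declaration, and every interrupt-context user fetches its own copy through __get_cpu_var(). A minimal sketch of that pattern with illustrative names (demo_buf and demo_irb are not from the patch):

#include <linux/percpu.h>

struct demo_buf {
	unsigned long word[4];
};

/* In exactly one .c file: the storage, one aligned instance per CPU. */
DEFINE_PER_CPU_ALIGNED(struct demo_buf, demo_irb);
EXPORT_PER_CPU_SYMBOL(demo_irb);

/* In the shared header: make the variable visible to other users. */
DECLARE_PER_CPU(struct demo_buf, demo_irb);

static void demo_irq_handler(void)
{
	/* Interrupt context: preemption is off, so the pointer is stable. */
	struct demo_buf *buf = &__get_cpu_var(demo_irb);

	buf->word[0]++;
}

Each CPU only ever touches its own copy, and does so with interrupts disabled, so no additional locking is needed.
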
......
@@ -739,7 +739,7 @@ ccw_device_irq(struct ccw_device *cdev, enum dev_event dev_event)
 	struct irb *irb;
 	int is_cmd;
 
-	irb = (struct irb *)&S390_lowcore.irb;
+	irb = &__get_cpu_var(cio_irb);
 	is_cmd = !scsw_is_tm(&irb->scsw);
 	/* Check for unsolicited interrupt. */
 	if (!scsw_is_solicited(&irb->scsw)) {
@@ -805,7 +805,7 @@ ccw_device_w4sense(struct ccw_device *cdev, enum dev_event dev_event)
 {
 	struct irb *irb;
 
-	irb = (struct irb *)&S390_lowcore.irb;
+	irb = &__get_cpu_var(cio_irb);
 	/* Check for unsolicited interrupt. */
 	if (scsw_stctl(&irb->scsw) ==
 	    (SCSW_STCTL_STATUS_PEND | SCSW_STCTL_ALERT_STATUS)) {
......
@@ -563,14 +563,23 @@ int ccw_device_stlck(struct ccw_device *cdev)
 	return rc;
 }
 
-void *ccw_device_get_chp_desc(struct ccw_device *cdev, int chp_no)
+/**
+ * chp_get_chp_desc - return newly allocated channel-path descriptor
+ * @cdev: device to obtain the descriptor for
+ * @chp_idx: index of the channel path
+ *
+ * On success return a newly allocated copy of the channel-path description
+ * data associated with the given channel path. Return %NULL on error.
+ */
+struct channel_path_desc *ccw_device_get_chp_desc(struct ccw_device *cdev,
+						  int chp_idx)
 {
 	struct subchannel *sch;
 	struct chp_id chpid;
 
 	sch = to_subchannel(cdev->dev.parent);
 	chp_id_init(&chpid);
-	chpid.id = sch->schib.pmcw.chpid[chp_no];
+	chpid.id = sch->schib.pmcw.chpid[chp_idx];
 	return chp_get_chp_desc(chpid);
 }
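
Returning struct channel_path_desc * instead of void * lets callers use the descriptor fields directly and gives the compiler a type to check; the qeth hunk further down drops its private copy of the structure for exactly this reason. A minimal sketch of a caller under that assumption (demo_show_chp() and its error handling are illustrative only, not part of the patch):

#include <linux/slab.h>
#include <linux/printk.h>
#include <asm/ccwdev.h>
#include <asm/chpid.h>

static int demo_show_chp(struct ccw_device *cdev)
{
	struct channel_path_desc *desc;

	desc = ccw_device_get_chp_desc(cdev, 0);
	if (!desc)
		return -ENOMEM;
	/* Fields are usable without a cast or a local struct definition. */
	pr_info("chpid %02x, desc %02x\n", desc->chpid, desc->desc);
	kfree(desc);	/* the descriptor is a freshly allocated copy */
	return 0;
}
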
......
@@ -134,7 +134,7 @@ static void eadm_subchannel_irq(struct subchannel *sch)
 {
 	struct eadm_private *private = get_eadm_private(sch);
 	struct eadm_scsw *scsw = &sch->schib.scsw.eadm;
-	struct irb *irb = (struct irb *)&S390_lowcore.irb;
+	struct irb *irb = &__get_cpu_var(cio_irb);
 	int error = 0;
 
 	EADM_LOG(6, "irq");
......
@@ -22,6 +22,7 @@
 #include <net/iucv/af_iucv.h>
 #include <asm/ebcdic.h>
+#include <asm/chpid.h>
 #include <asm/io.h>
 #include <asm/sysinfo.h>
 #include <asm/compat.h>
@@ -1344,16 +1345,7 @@ static void qeth_set_multiple_write_queues(struct qeth_card *card)
 static void qeth_update_from_chp_desc(struct qeth_card *card)
 {
 	struct ccw_device *ccwdev;
-	struct channelPath_dsc {
-		u8 flags;
-		u8 lsn;
-		u8 desc;
-		u8 chpid;
-		u8 swla;
-		u8 zeroes;
-		u8 chla;
-		u8 chpp;
-	} *chp_dsc;
+	struct channel_path_desc *chp_dsc;
 
 	QETH_DBF_TEXT(SETUP, 2, "chp_desc");
......
@@ -18,6 +18,7 @@
 #include <linux/mm.h>
 
 #define INIT_MEMBLOCK_REGIONS	128
+#define INIT_PHYSMEM_REGIONS	4
 
 /* Definition of memblock flags. */
 #define MEMBLOCK_HOTPLUG	0x1	/* hotpluggable region */
@@ -43,6 +44,9 @@ struct memblock {
 	phys_addr_t current_limit;
 	struct memblock_type memory;
 	struct memblock_type reserved;
+#ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
+	struct memblock_type physmem;
+#endif
 };
 
 extern struct memblock memblock;
@@ -71,6 +75,63 @@ int memblock_reserve(phys_addr_t base, phys_addr_t size);
 void memblock_trim_memory(phys_addr_t align);
 int memblock_mark_hotplug(phys_addr_t base, phys_addr_t size);
 int memblock_clear_hotplug(phys_addr_t base, phys_addr_t size);
+
+/* Low level functions */
+int memblock_add_range(struct memblock_type *type,
+		       phys_addr_t base, phys_addr_t size,
+		       int nid, unsigned long flags);
+
+int memblock_remove_range(struct memblock_type *type,
+			  phys_addr_t base,
+			  phys_addr_t size);
+
+void __next_mem_range(u64 *idx, int nid, struct memblock_type *type_a,
+		      struct memblock_type *type_b, phys_addr_t *out_start,
+		      phys_addr_t *out_end, int *out_nid);
+
+void __next_mem_range_rev(u64 *idx, int nid, struct memblock_type *type_a,
+			  struct memblock_type *type_b, phys_addr_t *out_start,
+			  phys_addr_t *out_end, int *out_nid);
+
+/**
+ * for_each_mem_range - iterate through memblock areas from type_a and not
+ * included in type_b. Or just type_a if type_b is NULL.
+ * @i: u64 used as loop variable
+ * @type_a: ptr to memblock_type to iterate
+ * @type_b: ptr to memblock_type which excludes from the iteration
+ * @nid: node selector, %NUMA_NO_NODE for all nodes
+ * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
+ * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
+ * @p_nid: ptr to int for nid of the range, can be %NULL
+ */
+#define for_each_mem_range(i, type_a, type_b, nid,			\
+			   p_start, p_end, p_nid)			\
+	for (i = 0, __next_mem_range(&i, nid, type_a, type_b,		\
+				     p_start, p_end, p_nid);		\
+	     i != (u64)ULLONG_MAX;					\
+	     __next_mem_range(&i, nid, type_a, type_b,			\
+			      p_start, p_end, p_nid))
+
+/**
+ * for_each_mem_range_rev - reverse iterate through memblock areas from
+ * type_a and not included in type_b. Or just type_a if type_b is NULL.
+ * @i: u64 used as loop variable
+ * @type_a: ptr to memblock_type to iterate
+ * @type_b: ptr to memblock_type which excludes from the iteration
+ * @nid: node selector, %NUMA_NO_NODE for all nodes
+ * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
+ * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
+ * @p_nid: ptr to int for nid of the range, can be %NULL
+ */
+#define for_each_mem_range_rev(i, type_a, type_b, nid,			\
+			       p_start, p_end, p_nid)			\
+	for (i = (u64)ULLONG_MAX,					\
+	     __next_mem_range_rev(&i, nid, type_a, type_b,		\
+				  p_start, p_end, p_nid);		\
+	     i != (u64)ULLONG_MAX;					\
+	     __next_mem_range_rev(&i, nid, type_a, type_b,		\
+				  p_start, p_end, p_nid))
+
 #ifdef CONFIG_MOVABLE_NODE
 static inline bool memblock_is_hotpluggable(struct memblock_region *m)
 {
@@ -113,9 +174,6 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
-void __next_free_mem_range(u64 *idx, int nid, phys_addr_t *out_start,
-			   phys_addr_t *out_end, int *out_nid);
-
 /*
  * for_each_free_mem_range - iterate through free memblock areas
  * @i: u64 used as loop variable
@@ -128,13 +186,8 @@ void __next_free_mem_range(u64 *idx, int nid, phys_addr_t *out_start,
  * soon as memblock is initialized.
  */
 #define for_each_free_mem_range(i, nid, p_start, p_end, p_nid)		\
-	for (i = 0,							\
-	     __next_free_mem_range(&i, nid, p_start, p_end, p_nid);	\
-	     i != (u64)ULLONG_MAX;					\
-	     __next_free_mem_range(&i, nid, p_start, p_end, p_nid))
-
-void __next_free_mem_range_rev(u64 *idx, int nid, phys_addr_t *out_start,
-			       phys_addr_t *out_end, int *out_nid);
+	for_each_mem_range(i, &memblock.memory, &memblock.reserved,	\
+			   nid, p_start, p_end, p_nid)
 
 /*
  * for_each_free_mem_range_reverse - rev-iterate through free memblock areas
@@ -148,10 +201,8 @@ void __next_free_mem_range_rev(u64 *idx, int nid, phys_addr_t *out_start,
  * order. Available as soon as memblock is initialized.
  */
 #define for_each_free_mem_range_reverse(i, nid, p_start, p_end, p_nid)	\
-	for (i = (u64)ULLONG_MAX,					\
-	     __next_free_mem_range_rev(&i, nid, p_start, p_end, p_nid);	\
-	     i != (u64)ULLONG_MAX;					\
-	     __next_free_mem_range_rev(&i, nid, p_start, p_end, p_nid))
+	for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved,	\
+			       nid, p_start, p_end, p_nid)
 
 static inline void memblock_set_region_flags(struct memblock_region *r,
 					     unsigned long flags)
......
@@ -134,6 +134,9 @@ config HAVE_MEMBLOCK
 config HAVE_MEMBLOCK_NODE_MAP
 	boolean
 
+config HAVE_MEMBLOCK_PHYS_MAP
+	boolean
+
 config ARCH_DISCARD_MEMBLOCK
 	boolean