Commit 10f3291a authored by Linus Torvalds

Merge branch 'akpm' (fixes from Andrew Morton)

Merge patches from Andrew Morton:
 "22 fixes"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (22 commits)
  kexec: purgatory: add clean-up for purgatory directory
  Documentation/kdump/kdump.txt: add ARM description
  flush_icache_range: export symbol to fix build errors
  tools: selftests: fix build issue with make kselftests target
  ocfs2: quorum: add a log for node not fenced
  ocfs2: o2net: set tcp user timeout to max value
  ocfs2: o2net: don't shutdown connection when idle timeout
  ocfs2: do not write error flag to user structure we cannot copy from/to
  x86/purgatory: use approprate -m64/-32 build flag for arch/x86/purgatory
  drivers/rtc/rtc-s5m.c: re-add support for devices without irq specified
  xattr: fix check for simultaneous glibc header inclusion
  kexec: remove CONFIG_KEXEC dependency on crypto
  kexec: create a new config option CONFIG_KEXEC_FILE for new syscall
  x86,mm: fix pte_special versus pte_numa
  hugetlb_cgroup: use lockdep_assert_held rather than spin_is_locked
  mm/zpool: use prefixed module loading
  zram: fix incorrect stat with failed_reads
  lib: turn CONFIG_STACKTRACE into an actual option.
  mm: actually clear pmd_numa before invalidating
  memblock, memhotplug: fix wrong type in memblock_find_in_range_node().
  ...
parents 878e580e b0108f9e
......@@ -18,7 +18,7 @@ memory image to a dump file on the local disk, or across the network to
a remote system.
Kdump and kexec are currently supported on the x86, x86_64, ppc64, ia64,
and s390x architectures.
s390x and arm architectures.
When the system kernel boots, it reserves a small section of memory for
the dump-capture kernel. This ensures that ongoing Direct Memory Access
......@@ -112,7 +112,7 @@ There are two possible methods of using Kdump.
2) Or use the system kernel binary itself as dump-capture kernel and there is
no need to build a separate dump-capture kernel. This is possible
only with the architectures which support a relocatable kernel. As
of today, i386, x86_64, ppc64 and ia64 architectures support relocatable
of today, i386, x86_64, ppc64, ia64 and arm architectures support relocatable
kernel.
Building a relocatable kernel is advantageous from the point of view that
......@@ -241,6 +241,13 @@ Dump-capture kernel config options (Arch Dependent, ia64)
kernel will be aligned to 64Mb, so if the start address is not then
any space below the alignment point will be wasted.
Dump-capture kernel config options (Arch Dependent, arm)
----------------------------------------------------------
- To use a relocatable kernel,
Enable "AUTO_ZRELADDR" support under "Boot" options:
AUTO_ZRELADDR=y
Extended crashkernel syntax
===========================
......@@ -256,6 +263,10 @@ The syntax is:
crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
range=start-[end]
Please note, on arm, the offset is required.
crashkernel=<range1>:<size1>[,<range2>:<size2>,...]@offset
range=start-[end]
'start' is inclusive and 'end' is exclusive.
For example:
......@@ -296,6 +307,12 @@ Boot into System Kernel
on the memory consumption of the kdump system. In general this is not
dependent on the memory size of the production system.
On arm, use "crashkernel=Y@X". Note that the start address of the kernel
will be aligned to 128MiB (0x08000000); if the start address is not
aligned, any space below the alignment point may be overwritten by the
dump-capture kernel, which means the resulting vmcore may not be as
precise as expected.
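For instance (illustrative values only, not taken from this patch; pick an
offset that is a multiple of 0x08000000 and that fits your board's memory map):

   crashkernel=64M@0x38000000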
Load the Dump-capture Kernel
============================
......@@ -315,7 +332,8 @@ For ia64:
- Use vmlinux or vmlinuz.gz
For s390x:
- Use image or bzImage
For arm:
- Use zImage
If you are using an uncompressed vmlinux image, then use the following command
to load the dump-capture kernel.
......@@ -331,6 +349,15 @@ to load dump-capture kernel.
--initrd=<initrd-for-dump-capture-kernel> \
--append="root=<root-dev> <arch-specific-options>"
If you are using a compressed zImage, then use the following command
to load the dump-capture kernel.
kexec --type zImage -p <dump-capture-kernel-zImage> \
--initrd=<initrd-for-dump-capture-kernel> \
--dtb=<dtb-for-dump-capture-kernel> \
--append="root=<root-dev> <arch-specific-options>"
Please note that --args-linux does not need to be specified for ia64.
It is planned to make this a no-op on that architecture, but for now
it should be omitted.
......@@ -347,6 +374,9 @@ For ppc64:
For s390x:
"1 maxcpus=1 cgroup_disable=memory"
For arm:
"1 maxcpus=1 reset_devices"
Notes on loading the dump-capture kernel:
* By default, the ELF headers are stored in ELF64 format to support
......
......@@ -581,6 +581,7 @@ void flush_icache_range(unsigned long kstart, unsigned long kend)
tot_sz -= sz;
}
}
EXPORT_SYMBOL(flush_icache_range);
/*
* General purpose helper to make I and D cache lines consistent.
......
......@@ -1983,8 +1983,6 @@ config XIP_PHYS_ADDR
config KEXEC
bool "Kexec system call (EXPERIMENTAL)"
depends on (!SMP || PM_SLEEP_SMP)
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......
......@@ -68,6 +68,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
);
local_irq_restore(flags);
}
EXPORT_SYMBOL(flush_icache_range);
void hexagon_clean_dcache_range(unsigned long start, unsigned long end)
{
......
......@@ -549,8 +549,6 @@ source "drivers/sn/Kconfig"
config KEXEC
bool "kexec system call"
depends on !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......
......@@ -91,8 +91,6 @@ config MMU_SUN3
config KEXEC
bool "kexec system call"
depends on M68KCLASSIC
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......
......@@ -2396,8 +2396,6 @@ source "kernel/Kconfig.preempt"
config KEXEC
bool "Kexec system call"
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......
......@@ -399,8 +399,6 @@ config PPC64_SUPPORTS_MEMORY_FAILURE
config KEXEC
bool "kexec system call"
depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP))
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......
......@@ -48,8 +48,6 @@ config ARCH_SUPPORTS_DEBUG_PAGEALLOC
config KEXEC
def_bool y
select CRYPTO
select CRYPTO_SHA256
config AUDIT_ARCH
def_bool y
......
......@@ -598,8 +598,6 @@ source kernel/Kconfig.hz
config KEXEC
bool "kexec system call (EXPERIMENTAL)"
depends on SUPERH32 && MMU
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......
......@@ -229,6 +229,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
cacheop_on_each_cpu(local_flush_icache_range, (void *)&data, 1);
}
EXPORT_SYMBOL(flush_icache_range);
void flush_icache_page(struct vm_area_struct *vma, struct page *page)
{
......
......@@ -191,8 +191,6 @@ source "kernel/Kconfig.hz"
config KEXEC
bool "kexec system call"
select CRYPTO
select CRYPTO_SHA256
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......
......@@ -183,6 +183,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
preempt_enable();
}
}
EXPORT_SYMBOL(flush_icache_range);
/* Called when smp_send_reschedule() triggers IRQ_RESCHEDULE. */
......
......@@ -17,6 +17,4 @@ obj-$(CONFIG_IA32_EMULATION) += ia32/
obj-y += platform/
obj-y += net/
ifeq ($(CONFIG_X86_64),y)
obj-$(CONFIG_KEXEC) += purgatory/
endif
obj-$(CONFIG_KEXEC_FILE) += purgatory/
......@@ -1585,9 +1585,6 @@ source kernel/Kconfig.hz
config KEXEC
bool "kexec system call"
select BUILD_BIN2C
select CRYPTO
select CRYPTO_SHA256
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
......@@ -1602,9 +1599,22 @@ config KEXEC
interface is strongly in flux, so no good recommendation can be
made.
config KEXEC_FILE
bool "kexec file based system call"
select BUILD_BIN2C
depends on KEXEC
depends on X86_64
depends on CRYPTO=y
depends on CRYPTO_SHA256=y
---help---
This is a new version of the kexec system call. This system call is
file based and takes file descriptors for the kernel and initramfs as
system call arguments, as opposed to the list of segments accepted by
the previous system call.
config KEXEC_VERIFY_SIG
bool "Verify kernel signature during kexec_file_load() syscall"
depends on KEXEC
depends on KEXEC_FILE
---help---
This option makes kernel signature verification mandatory for the
kexec_file_load() syscall. If the kernel signature can not be
......
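The KEXEC_FILE help text above describes a file-descriptor based interface. As
a rough illustration only (not taken from this series), the new syscall could
be driven from userspace along the lines below, assuming a kernel and headers
that define __NR_kexec_file_load and the KEXEC_FILE_* flags; the kernel/initrd
paths and the command line are made up.

/*
 * Hypothetical sketch: load a crash kernel through the file-based
 * kexec_file_load() syscall guarded by CONFIG_KEXEC_FILE.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/kexec.h>

int main(void)
{
	const char *cmdline = "root=/dev/sda1 maxcpus=1 reset_devices";
	int kernel_fd = open("/boot/vmlinuz", O_RDONLY);
	int initrd_fd = open("/boot/initrd.img", O_RDONLY);

	if (kernel_fd < 0 || initrd_fd < 0) {
		perror("open");
		return 1;
	}

	/* cmdline_len must count the trailing NUL; KEXEC_FILE_ON_CRASH loads
	 * into the reserved crashkernel region instead of a normal kexec. */
	if (syscall(__NR_kexec_file_load, kernel_fd, initrd_fd,
		    strlen(cmdline) + 1, cmdline, KEXEC_FILE_ON_CRASH) < 0) {
		perror("kexec_file_load");
		return 1;
	}
	return 0;
}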
......@@ -184,11 +184,8 @@ archheaders:
$(Q)$(MAKE) $(build)=arch/x86/syscalls all
archprepare:
ifeq ($(CONFIG_KEXEC),y)
# Build only for 64bit. No loaders for 32bit yet.
ifeq ($(CONFIG_X86_64),y)
ifeq ($(CONFIG_KEXEC_FILE),y)
$(Q)$(MAKE) $(build)=arch/x86/purgatory arch/x86/purgatory/kexec-purgatory.c
endif
endif
###
......@@ -254,6 +251,7 @@ archclean:
$(Q)rm -rf $(objtree)/arch/x86_64
$(Q)$(MAKE) $(clean)=$(boot)
$(Q)$(MAKE) $(clean)=arch/x86/tools
$(Q)$(MAKE) $(clean)=arch/x86/purgatory
PHONY += kvmconfig
kvmconfig:
......
......@@ -131,8 +131,13 @@ static inline int pte_exec(pte_t pte)
static inline int pte_special(pte_t pte)
{
return (pte_flags(pte) & (_PAGE_PRESENT|_PAGE_SPECIAL)) ==
(_PAGE_PRESENT|_PAGE_SPECIAL);
/*
* See CONFIG_NUMA_BALANCING pte_numa in include/asm-generic/pgtable.h.
* On x86 we have _PAGE_BIT_NUMA == _PAGE_BIT_GLOBAL+1 ==
* __PAGE_BIT_SOFTW1 == _PAGE_BIT_SPECIAL.
*/
return (pte_flags(pte) & _PAGE_SPECIAL) &&
(pte_flags(pte) & (_PAGE_PRESENT|_PAGE_PROTNONE));
}
static inline unsigned long pte_pfn(pte_t pte)
......
......@@ -71,6 +71,7 @@ obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o
obj-$(CONFIG_X86_TSC) += trace_clock.o
obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o
obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o
obj-$(CONFIG_KEXEC_FILE) += kexec-bzimage64.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o
obj-y += kprobes/
obj-$(CONFIG_MODULES) += module.o
......@@ -118,5 +119,4 @@ ifeq ($(CONFIG_X86_64),y)
obj-$(CONFIG_PCI_MMCONFIG) += mmconf-fam10h_64.o
obj-y += vsmp_64.o
obj-$(CONFIG_KEXEC) += kexec-bzimage64.o
endif
......@@ -182,8 +182,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
crash_save_cpu(regs, safe_smp_processor_id());
}
#ifdef CONFIG_X86_64
#ifdef CONFIG_KEXEC_FILE
static int get_nr_ram_ranges_callback(unsigned long start_pfn,
unsigned long nr_pfn, void *arg)
{
......@@ -696,5 +695,4 @@ int crash_load_segments(struct kimage *image)
return ret;
}
#endif /* CONFIG_X86_64 */
#endif /* CONFIG_KEXEC_FILE */
......@@ -25,9 +25,11 @@
#include <asm/debugreg.h>
#include <asm/kexec-bzimage64.h>
#ifdef CONFIG_KEXEC_FILE
static struct kexec_file_ops *kexec_file_loaders[] = {
&kexec_bzImage64_ops,
};
#endif
static void free_transition_pgtable(struct kimage *image)
{
......@@ -178,6 +180,7 @@ static void load_segments(void)
);
}
#ifdef CONFIG_KEXEC_FILE
/* Update purgatory as needed after various image segments have been prepared */
static int arch_update_purgatory(struct kimage *image)
{
......@@ -209,6 +212,12 @@ static int arch_update_purgatory(struct kimage *image)
return ret;
}
#else /* !CONFIG_KEXEC_FILE */
static inline int arch_update_purgatory(struct kimage *image)
{
return 0;
}
#endif /* CONFIG_KEXEC_FILE */
int machine_kexec_prepare(struct kimage *image)
{
......@@ -329,6 +338,7 @@ void arch_crash_save_vmcoreinfo(void)
/* arch-dependent functionality related to kexec file-based syscall */
#ifdef CONFIG_KEXEC_FILE
int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
unsigned long buf_len)
{
......@@ -522,3 +532,4 @@ int arch_kexec_apply_relocations_add(const Elf64_Ehdr *ehdr,
(int)ELF64_R_TYPE(rel[i].r_info), value);
return -ENOEXEC;
}
#endif /* CONFIG_KEXEC_FILE */
......@@ -11,6 +11,7 @@ targets += purgatory.ro
# sure how to relocate those. Like kexec-tools, use custom flags.
KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -MD -Os -mcmodel=large
KBUILD_CFLAGS += -m$(BITS)
$(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
$(call if_changed,ld)
......@@ -24,7 +25,4 @@ $(obj)/kexec-purgatory.c: $(obj)/purgatory.ro FORCE
$(call if_changed,bin2c)
# No loaders for 32bits yet.
ifeq ($(CONFIG_X86_64),y)
obj-$(CONFIG_KEXEC) += kexec-purgatory.o
endif
obj-$(CONFIG_KEXEC_FILE) += kexec-purgatory.o
......@@ -571,6 +571,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
};
on_each_cpu(ipi_flush_icache_range, &fd, 1);
}
EXPORT_SYMBOL(flush_icache_range);
/* ------------------------------------------------------------------------- */
......
......@@ -378,7 +378,6 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
/* Should NEVER happen. Return bio error if it does. */
if (unlikely(ret)) {
pr_err("Decompression failed! err=%d, page=%u\n", ret, index);
atomic64_inc(&zram->stats.failed_reads);
return ret;
}
......@@ -547,8 +546,6 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
zcomp_strm_release(zram->comp, zstrm);
if (is_partial_io(bvec))
kfree(uncmem);
if (ret)
atomic64_inc(&zram->stats.failed_writes);
return ret;
}
......@@ -566,6 +563,13 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
ret = zram_bvec_write(zram, bvec, index, offset);
}
if (unlikely(ret)) {
if (rw == READ)
atomic64_inc(&zram->stats.failed_reads);
else
atomic64_inc(&zram->stats.failed_writes);
}
return ret;
}
......
......@@ -84,7 +84,7 @@ struct zram_stats {
atomic64_t compr_data_size; /* compressed size of pages stored */
atomic64_t num_reads; /* failed + successful */
atomic64_t num_writes; /* --do-- */
atomic64_t failed_reads; /* should NEVER! happen */
atomic64_t failed_reads; /* can happen when memory is too low */
atomic64_t failed_writes; /* can happen when memory is too low */
atomic64_t invalid_io; /* non-page-aligned I/O requests */
atomic64_t notify_free; /* no. of swap slot free notifications */
......
......@@ -717,12 +717,14 @@ static int s5m_rtc_probe(struct platform_device *pdev)
info->device_type = s5m87xx->device_type;
info->wtsr_smpl = s5m87xx->wtsr_smpl;
info->irq = regmap_irq_get_virq(s5m87xx->irq_data, alarm_irq);
if (info->irq <= 0) {
ret = -EINVAL;
dev_err(&pdev->dev, "Failed to get virtual IRQ %d\n",
if (s5m87xx->irq_data) {
info->irq = regmap_irq_get_virq(s5m87xx->irq_data, alarm_irq);
if (info->irq <= 0) {
ret = -EINVAL;
dev_err(&pdev->dev, "Failed to get virtual IRQ %d\n",
alarm_irq);
goto err;
goto err;
}
}
platform_set_drvdata(pdev, info);
......@@ -744,6 +746,11 @@ static int s5m_rtc_probe(struct platform_device *pdev)
goto err;
}
if (!info->irq) {
dev_info(&pdev->dev, "Alarm IRQ not available\n");
return 0;
}
ret = devm_request_threaded_irq(&pdev->dev, info->irq, NULL,
s5m_rtc_alarm_irq, 0, "rtc-alarm0",
info);
......@@ -802,7 +809,7 @@ static int s5m_rtc_resume(struct device *dev)
struct s5m_rtc_info *info = dev_get_drvdata(dev);
int ret = 0;
if (device_may_wakeup(dev))
if (info->irq && device_may_wakeup(dev))
ret = disable_irq_wake(info->irq);
return ret;
......@@ -813,7 +820,7 @@ static int s5m_rtc_suspend(struct device *dev)
struct s5m_rtc_info *info = dev_get_drvdata(dev);
int ret = 0;
if (device_may_wakeup(dev))
if (info->irq && device_may_wakeup(dev))
ret = enable_irq_wake(info->irq);
return ret;
......
......@@ -160,9 +160,18 @@ static void o2quo_make_decision(struct work_struct *work)
}
out:
spin_unlock(&qs->qs_lock);
if (fence)
if (fence) {
spin_unlock(&qs->qs_lock);
o2quo_fence_self();
} else {
mlog(ML_NOTICE, "not fencing this node, heartbeating: %d, "
"connected: %d, lowest: %d (%sreachable)\n",
qs->qs_heartbeating, qs->qs_connected, lowest_hb,
lowest_reachable ? "" : "un");
spin_unlock(&qs->qs_lock);
}
}
static void o2quo_set_hold(struct o2quo_state *qs, u8 node)
......
......@@ -1480,6 +1480,14 @@ static int o2net_set_nodelay(struct socket *sock)
return ret;
}
static int o2net_set_usertimeout(struct socket *sock)
{
int user_timeout = O2NET_TCP_USER_TIMEOUT;
return kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
(char *)&user_timeout, sizeof(user_timeout));
}
static void o2net_initialize_handshake(void)
{
o2net_hand->o2hb_heartbeat_timeout_ms = cpu_to_be32(
......@@ -1536,16 +1544,20 @@ static void o2net_idle_timer(unsigned long data)
#endif
printk(KERN_NOTICE "o2net: Connection to " SC_NODEF_FMT " has been "
"idle for %lu.%lu secs, shutting it down.\n", SC_NODEF_ARGS(sc),
msecs / 1000, msecs % 1000);
"idle for %lu.%lu secs.\n",
SC_NODEF_ARGS(sc), msecs / 1000, msecs % 1000);
/*
* Initialize the nn_timeout so that the next connection attempt
* will continue in o2net_start_connect.
/* idle timeout happened; don't shut down the connection, but
* make the fence decision. Maybe the connection can recover before
* the decision is made.
*/
atomic_set(&nn->nn_timeout, 1);
o2quo_conn_err(o2net_num_from_nn(nn));
queue_delayed_work(o2net_wq, &nn->nn_still_up,
msecs_to_jiffies(O2NET_QUORUM_DELAY_MS));
o2net_sc_reset_idle_timer(sc);
o2net_sc_queue_work(sc, &sc->sc_shutdown_work);
}
static void o2net_sc_reset_idle_timer(struct o2net_sock_container *sc)
......@@ -1560,6 +1572,15 @@ static void o2net_sc_reset_idle_timer(struct o2net_sock_container *sc)
static void o2net_sc_postpone_idle(struct o2net_sock_container *sc)
{
struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num);
/* clear fence decision since the connection recovered from timeout */
if (atomic_read(&nn->nn_timeout)) {
o2quo_conn_up(o2net_num_from_nn(nn));
cancel_delayed_work(&nn->nn_still_up);
atomic_set(&nn->nn_timeout, 0);
}
/* Only push out an existing timer */
if (timer_pending(&sc->sc_idle_timeout))
o2net_sc_reset_idle_timer(sc);
......@@ -1650,6 +1671,12 @@ static void o2net_start_connect(struct work_struct *work)
goto out;
}
ret = o2net_set_usertimeout(sock);
if (ret) {
mlog(ML_ERROR, "set TCP_USER_TIMEOUT failed with %d\n", ret);
goto out;
}
o2net_register_callbacks(sc->sc_sock->sk, sc);
spin_lock(&nn->nn_lock);
......@@ -1831,6 +1858,12 @@ static int o2net_accept_one(struct socket *sock, int *more)
goto out;
}
ret = o2net_set_usertimeout(new_sock);
if (ret) {
mlog(ML_ERROR, "set TCP_USER_TIMEOUT failed with %d\n", ret);
goto out;
}
slen = sizeof(sin);
ret = new_sock->ops->getname(new_sock, (struct sockaddr *) &sin,
&slen, 1);
......
......@@ -63,6 +63,7 @@ typedef void (o2net_post_msg_handler_func)(int status, void *data,
#define O2NET_KEEPALIVE_DELAY_MS_DEFAULT 2000
#define O2NET_IDLE_TIMEOUT_MS_DEFAULT 30000
#define O2NET_TCP_USER_TIMEOUT 0x7fffffff
/* TODO: figure this out.... */
static inline int o2net_link_down(int err, struct socket *sock)
......
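The new o2net_set_usertimeout() above uses kernel_setsockopt() to raise
TCP_USER_TIMEOUT to O2NET_TCP_USER_TIMEOUT so TCP does not tear the socket
down underneath the quorum logic. For reference only (not from the patch),
the same knob looks like this from userspace; a minimal sketch, assuming a
libc whose <netinet/tcp.h> defines TCP_USER_TIMEOUT:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/*
 * Userspace analogue of o2net_set_usertimeout(): bound how long transmitted
 * data may stay unacknowledged before TCP drops the connection. Passing a
 * very large value (as O2NET_TCP_USER_TIMEOUT does) effectively disables
 * the timeout.
 */
static int set_tcp_user_timeout(int fd, unsigned int timeout_ms)
{
	return setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
			  &timeout_ms, sizeof(timeout_ms));
}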
......@@ -35,9 +35,8 @@
copy_to_user((typeof(a) __user *)b, &(a), sizeof(a))
/*
* This call is void because we are already reporting an error that may
* be -EFAULT. The error will be returned from the ioctl(2) call. It's
* just a best-effort to tell userspace that this request caused the error.
* This is just a best-effort to tell userspace that this request
* caused the error.
*/
static inline void o2info_set_request_error(struct ocfs2_info_request *kreq,
struct ocfs2_info_request __user *req)
......@@ -146,136 +145,105 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
static int ocfs2_info_handle_blocksize(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_blocksize oib;
if (o2info_from_user(oib, req))
goto bail;
return -EFAULT;
oib.ib_blocksize = inode->i_sb->s_blocksize;
o2info_set_request_filled(&oib.ib_req);
if (o2info_to_user(oib, req))
goto bail;
status = 0;
bail:
if (status)
o2info_set_request_error(&oib.ib_req, req);
return -EFAULT;
return status;
return 0;
}
static int ocfs2_info_handle_clustersize(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_clustersize oic;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oic, req))
goto bail;
return -EFAULT;
oic.ic_clustersize = osb->s_clustersize;
o2info_set_request_filled(&oic.ic_req);
if (o2info_to_user(oic, req))
goto bail;
status = 0;
bail:
if (status)
o2info_set_request_error(&oic.ic_req, req);
return -EFAULT;
return status;
return 0;
}
static int ocfs2_info_handle_maxslots(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_maxslots oim;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oim, req))
goto bail;
return -EFAULT;
oim.im_max_slots = osb->max_slots;
o2info_set_request_filled(&oim.im_req);
if (o2info_to_user(oim, req))
goto bail;
return -EFAULT;
status = 0;
bail:
if (status)
o2info_set_request_error(&oim.im_req, req);
return status;
return 0;
}
static int ocfs2_info_handle_label(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_label oil;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oil, req))
goto bail;
return -EFAULT;
memcpy(oil.il_label, osb->vol_label, OCFS2_MAX_VOL_LABEL_LEN);
o2info_set_request_filled(&oil.il_req);
if (o2info_to_user(oil, req))
goto bail;
return -EFAULT;
status = 0;
bail:
if (status)
o2info_set_request_error(&oil.il_req, req);
return status;
return 0;
}
static int ocfs2_info_handle_uuid(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_uuid oiu;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oiu, req))
goto bail;
return -EFAULT;
memcpy(oiu.iu_uuid_str, osb->uuid_str, OCFS2_TEXT_UUID_LEN + 1);
o2info_set_request_filled(&oiu.iu_req);
if (o2info_to_user(oiu, req))
goto bail;
status = 0;
bail:
if (status)
o2info_set_request_error(&oiu.iu_req, req);
return -EFAULT;
return status;
return 0;
}
static int ocfs2_info_handle_fs_features(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_fs_features oif;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oif, req))
goto bail;
return -EFAULT;
oif.if_compat_features = osb->s_feature_compat;
oif.if_incompat_features = osb->s_feature_incompat;
......@@ -284,39 +252,28 @@ static int ocfs2_info_handle_fs_features(struct inode *inode,
o2info_set_request_filled(&oif.if_req);
if (o2info_to_user(oif, req))
goto bail;
return -EFAULT;
status = 0;
bail:
if (status)
o2info_set_request_error(&oif.if_req, req);
return status;
return 0;
}
static int ocfs2_info_handle_journal_size(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_journal_size oij;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oij, req))
goto bail;
return -EFAULT;
oij.ij_journal_size = i_size_read(osb->journal->j_inode);
o2info_set_request_filled(&oij.ij_req);
if (o2info_to_user(oij, req))
goto bail;
return -EFAULT;
status = 0;
bail:
if (status)
o2info_set_request_error(&oij.ij_req, req);
return status;
return 0;
}
static int ocfs2_info_scan_inode_alloc(struct ocfs2_super *osb,
......@@ -373,7 +330,7 @@ static int ocfs2_info_handle_freeinode(struct inode *inode,
u32 i;
u64 blkno = -1;
char namebuf[40];
int status = -EFAULT, type = INODE_ALLOC_SYSTEM_INODE;
int status, type = INODE_ALLOC_SYSTEM_INODE;
struct ocfs2_info_freeinode *oifi = NULL;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
struct inode *inode_alloc = NULL;
......@@ -385,8 +342,10 @@ static int ocfs2_info_handle_freeinode(struct inode *inode,
goto out_err;
}
if (o2info_from_user(*oifi, req))
goto bail;
if (o2info_from_user(*oifi, req)) {
status = -EFAULT;
goto out_free;
}
oifi->ifi_slotnum = osb->max_slots;
......@@ -424,14 +383,16 @@ static int ocfs2_info_handle_freeinode(struct inode *inode,
o2info_set_request_filled(&oifi->ifi_req);
if (o2info_to_user(*oifi, req))
goto bail;
if (o2info_to_user(*oifi, req)) {
status = -EFAULT;
goto out_free;
}
status = 0;
bail:
if (status)
o2info_set_request_error(&oifi->ifi_req, req);
out_free:
kfree(oifi);
out_err:
return status;
......@@ -658,7 +619,7 @@ static int ocfs2_info_handle_freefrag(struct inode *inode,
{
u64 blkno = -1;
char namebuf[40];
int status = -EFAULT, type = GLOBAL_BITMAP_SYSTEM_INODE;
int status, type = GLOBAL_BITMAP_SYSTEM_INODE;
struct ocfs2_info_freefrag *oiff;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
......@@ -671,8 +632,10 @@ static int ocfs2_info_handle_freefrag(struct inode *inode,
goto out_err;
}
if (o2info_from_user(*oiff, req))
goto bail;
if (o2info_from_user(*oiff, req)) {
status = -EFAULT;
goto out_free;
}
/*
* chunksize from userspace should be power of 2.
*/
......@@ -711,14 +674,14 @@ static int ocfs2_info_handle_freefrag(struct inode *inode,
if (o2info_to_user(*oiff, req)) {
status = -EFAULT;
goto bail;
goto out_free;
}
status = 0;
bail:
if (status)
o2info_set_request_error(&oiff->iff_req, req);
out_free:
kfree(oiff);
out_err:
return status;
......@@ -727,23 +690,17 @@ static int ocfs2_info_handle_freefrag(struct inode *inode,
static int ocfs2_info_handle_unknown(struct inode *inode,
struct ocfs2_info_request __user *req)
{
int status = -EFAULT;
struct ocfs2_info_request oir;
if (o2info_from_user(oir, req))
goto bail;
return -EFAULT;
o2info_clear_request_filled(&oir);
if (o2info_to_user(oir, req))
goto bail;
return -EFAULT;
status = 0;
bail:
if (status)
o2info_set_request_error(&oir, req);
return status;
return 0;
}
/*
......
......@@ -13,7 +13,7 @@
#ifndef _UAPI_LINUX_XATTR_H
#define _UAPI_LINUX_XATTR_H
#ifdef __UAPI_DEF_XATTR
#if __UAPI_DEF_XATTR
#define __USE_KERNEL_XATTR_DEFS
#define XATTR_CREATE 0x1 /* set value, fail if attr already exists */
......
......@@ -64,7 +64,9 @@ bool kexec_in_progress = false;
char __weak kexec_purgatory[0];
size_t __weak kexec_purgatory_size = 0;
#ifdef CONFIG_KEXEC_FILE
static int kexec_calculate_store_digests(struct kimage *image);
#endif
/* Location of the reserved area for the crash kernel */
struct resource crashk_res = {
......@@ -341,6 +343,7 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
return ret;
}
#ifdef CONFIG_KEXEC_FILE
static int copy_file_from_fd(int fd, void **buf, unsigned long *buf_len)
{
struct fd f = fdget(fd);
......@@ -612,6 +615,9 @@ kimage_file_alloc_init(struct kimage **rimage, int kernel_fd,
kfree(image);
return ret;
}
#else /* CONFIG_KEXEC_FILE */
static inline void kimage_file_post_load_cleanup(struct kimage *image) { }
#endif /* CONFIG_KEXEC_FILE */
static int kimage_is_destination_range(struct kimage *image,
unsigned long start,
......@@ -1375,6 +1381,7 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
}
#endif
#ifdef CONFIG_KEXEC_FILE
SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
unsigned long, cmdline_len, const char __user *, cmdline_ptr,
unsigned long, flags)
......@@ -1451,6 +1458,8 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
return ret;
}
#endif /* CONFIG_KEXEC_FILE */
void crash_kexec(struct pt_regs *regs)
{
/* Take the kexec_mutex here to prevent sys_kexec_load
......@@ -2006,6 +2015,7 @@ static int __init crash_save_vmcoreinfo_init(void)
subsys_initcall(crash_save_vmcoreinfo_init);
#ifdef CONFIG_KEXEC_FILE
static int __kexec_add_segment(struct kimage *image, char *buf,
unsigned long bufsz, unsigned long mem,
unsigned long memsz)
......@@ -2682,6 +2692,7 @@ int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name,
return 0;
}
#endif /* CONFIG_KEXEC_FILE */
/*
* Move into place and start executing a preloaded standalone
......
......@@ -351,15 +351,12 @@ static int find_next_iomem_res(struct resource *res, char *name,
end = res->end;
BUG_ON(start >= end);
read_lock(&resource_lock);
if (first_level_children_only) {
p = iomem_resource.child;
if (first_level_children_only)
sibling_only = true;
} else
p = &iomem_resource;
while ((p = next_resource(p, sibling_only))) {
read_lock(&resource_lock);
for (p = iomem_resource.child; p; p = next_resource(p, sibling_only)) {
if (p->flags != res->flags)
continue;
if (name && strcmp(p->name, name))
......
......@@ -1036,8 +1036,13 @@ config TRACE_IRQFLAGS
either tracing or lock debugging.
config STACKTRACE
bool
bool "Stack backtrace support"
depends on STACKTRACE_SUPPORT
help
This option causes the kernel to create a /proc/pid/stack for
every process, showing its current stack trace.
It is also used by various kernel debugging features that require
stack trace generation.
config DEBUG_KOBJECT
bool "kobject debugging"
......
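The new STACKTRACE help text above mentions /proc/pid/stack. Purely as an
illustration (not part of the patch), that file can be read like any other
procfs entry once the option is enabled; which pid to inspect and the usual
root-only restriction are the caller's concern:

/* Dump the kernel stack of a pid given on the command line, or of the
 * reading process itself when no argument is supplied. Requires
 * CONFIG_STACKTRACE=y and, normally, root privileges. */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/stack",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}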
......@@ -217,7 +217,7 @@ void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
if (hugetlb_cgroup_disabled())
return;
VM_BUG_ON(!spin_is_locked(&hugetlb_lock));
lockdep_assert_held(&hugetlb_lock);
h_cg = hugetlb_cgroup_from_page(page);
if (unlikely(!h_cg))
return;
......
......@@ -192,8 +192,7 @@ phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
phys_addr_t align, phys_addr_t start,
phys_addr_t end, int nid)
{
int ret;
phys_addr_t kernel_end;
phys_addr_t kernel_end, ret;
/* pump up @end */
if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
......
......@@ -751,7 +751,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn = pte_pfn(pte);
if (HAVE_PTE_SPECIAL) {
if (likely(!pte_special(pte) || pte_numa(pte)))
if (likely(!pte_special(pte)))
goto check_pfn;
if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
return NULL;
......@@ -777,15 +777,14 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
}
}
if (is_zero_pfn(pfn))
return NULL;
check_pfn:
if (unlikely(pfn > highest_memmap_pfn)) {
print_bad_pte(vma, addr, pte, NULL);
return NULL;
}
if (is_zero_pfn(pfn))
return NULL;
/*
* NOTE! We still have PageReserved() pages in the page tables.
* eg. VDSO mappings can cause them to exist.
......
......@@ -195,7 +195,7 @@ void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t entry = *pmdp;
if (pmd_numa(entry))
entry = pmd_mknonnuma(entry);
set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(*pmdp));
set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
......
......@@ -195,6 +195,7 @@ static struct zpool_driver zbud_zpool_driver = {
.total_size = zbud_zpool_total_size,
};
MODULE_ALIAS("zpool-zbud");
#endif /* CONFIG_ZPOOL */
/*****************
......
......@@ -150,7 +150,7 @@ struct zpool *zpool_create_pool(char *type, gfp_t gfp, struct zpool_ops *ops)
driver = zpool_get_driver(type);
if (!driver) {
request_module(type);
request_module("zpool-%s", type);
driver = zpool_get_driver(type);
}
......
......@@ -315,6 +315,7 @@ static struct zpool_driver zs_zpool_driver = {
.total_size = zs_zpool_total_size,
};
MODULE_ALIAS("zpool-zsmalloc");
#endif /* CONFIG_ZPOOL */
/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
......
......@@ -2133,7 +2133,7 @@ sub process {
# Check for improperly formed commit descriptions
if ($in_commit_log &&
$line =~ /\bcommit\s+[0-9a-f]{5,}/i &&
$line !~ /\b[Cc]ommit [0-9a-f]{12,16} \("/) {
$line !~ /\b[Cc]ommit [0-9a-f]{12,40} \("/) {
$line =~ /\b(c)ommit\s+([0-9a-f]{5,})/i;
my $init_char = $1;
my $orig_commit = lc($2);
......@@ -2141,7 +2141,7 @@ sub process {
my $desc = 'commit description';
($id, $desc) = git_commit_info($orig_commit, $id, $desc);
ERROR("GIT_COMMIT_ID",
"Please use 12 to 16 chars for the git commit ID like: '${init_char}ommit $id (\"$desc\")'\n" . $herecurr);
"Please use 12 or more chars for the git commit ID like: '${init_char}ommit $id (\"$desc\")'\n" . $herecurr);
}
# Check for added, moved or deleted files
......
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/)
ifeq ($(ARCH),i386)
ARCH := X86
ARCH := x86
CFLAGS := -DCONFIG_X86_32 -D__i386__
endif
ifeq ($(ARCH),x86_64)
ARCH := X86
ARCH := x86
CFLAGS := -DCONFIG_X86_64 -D__x86_64__
endif
CFLAGS += -I../../../../usr/include/
all:
ifeq ($(ARCH),X86)
ifeq ($(ARCH),x86)
gcc $(CFLAGS) msgque.c -o msgque_test
else
echo "Not an x86 target, can't build msgque selftest"
......
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/)
ifeq ($(ARCH),i386)
ARCH := X86
ARCH := x86
CFLAGS := -DCONFIG_X86_32 -D__i386__
endif
ifeq ($(ARCH),x86_64)
ARCH := X86
ARCH := x86
CFLAGS := -DCONFIG_X86_64 -D__x86_64__
endif
......@@ -15,7 +15,7 @@ CFLAGS += -I../../../../usr/include/
CFLAGS += -I../../../../arch/x86/include/
all:
ifeq ($(ARCH),X86)
ifeq ($(ARCH),x86)
gcc $(CFLAGS) kcmp_test.c -o kcmp_test
else
echo "Not an x86 target, can't build kcmp selftest"
......
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/)
ifeq ($(ARCH),i386)
ARCH := X86
ARCH := x86
endif
ifeq ($(ARCH),x86_64)
ARCH := X86
ARCH := x86
endif
CFLAGS += -D_FILE_OFFSET_BITS=64
......@@ -14,20 +14,20 @@ CFLAGS += -I../../../../include/uapi/
CFLAGS += -I../../../../include/
all:
ifeq ($(ARCH),X86)
ifeq ($(ARCH),x86)
gcc $(CFLAGS) memfd_test.c -o memfd_test
else
echo "Not an x86 target, can't build memfd selftest"
endif
run_tests: all
ifeq ($(ARCH),X86)
ifeq ($(ARCH),x86)
gcc $(CFLAGS) memfd_test.c -o memfd_test
endif
@./memfd_test || echo "memfd_test: [FAIL]"
build_fuse:
ifeq ($(ARCH),X86)
ifeq ($(ARCH),x86)
gcc $(CFLAGS) fuse_mnt.c `pkg-config fuse --cflags --libs` -o fuse_mnt
gcc $(CFLAGS) fuse_test.c -o fuse_test
else
......