Commit 35776f10 authored by Linus Torvalds

Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm

Pull ARM development updates from Russell King:

 - Rename "mod_init" and "mod_exit" so that initcall debug output is
   actually useful (Randy Dunlap)

 - Update maintainers entries for linux-arm-kernel to indicate it is
   moderated for non-subscribers (Randy Dunlap)

 - Move install rules to arch/arm/Makefile (Masahiro Yamada)

 - Drop unnecessary ARCH_NR_GPIOS definition (Linus Walleij)

 - Don't warn about atags_to_fdt() stack size (David Heidelberg)

 - Speed up unaligned copy_{from,to}_kernel_nofault (Arnd Bergmann)

 - Get rid of set_fs() usage (Arnd Bergmann; a usage sketch of the nofault
   helpers that replace it follows this list)

 - Remove checks for GCC prior to v4.6 (Geert Uytterhoeven)
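
The set_fs()/KERNEL_DS pattern that this series removes was typically used to
point the user accessors at kernel addresses. A minimal sketch of the
replacement pattern, assuming the generic nofault helpers from
<linux/uaccess.h> (illustrative code, not taken from this pull):

	#include <linux/uaccess.h>

	/*
	 * Old (removed) style:
	 *	mm_segment_t fs = get_fs();
	 *	set_fs(KERNEL_DS);
	 *	__get_user(val, (u32 __user *)kptr);
	 *	set_fs(fs);
	 *
	 * New style: read a possibly-faulting kernel address directly,
	 * without widening any per-thread address limit.
	 */
	static int read_kernel_word(const u32 *kptr, u32 *out)
	{
		u32 val;

		if (get_kernel_nofault(val, kptr))
			return -EFAULT;		/* access faulted, no oops */

		*out = val;
		return 0;
	}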

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 9118/1: div64: Remove always-true __div64_const32_is_OK() duplicate
  ARM: 9117/1: asm-generic: div64: Remove always-true __div64_const32_is_OK()
  ARM: 9116/1: unified: Remove check for gcc < 4
  ARM: 9110/1: oabi-compat: fix oabi epoll sparse warning
  ARM: 9113/1: uaccess: remove set_fs() implementation
  ARM: 9112/1: uaccess: add __{get,put}_kernel_nofault
  ARM: 9111/1: oabi-compat: rework fcntl64() emulation
  ARM: 9114/1: oabi-compat: rework sys_semtimedop emulation
  ARM: 9108/1: oabi-compat: rework epoll_wait/epoll_pwait emulation
  ARM: 9107/1: syscall: always store thread_info->abi_syscall
  ARM: 9109/1: oabi-compat: add epoll_pwait handler
  ARM: 9106/1: traps: use get_kernel_nofault instead of set_fs()
  ARM: 9115/1: mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
  ARM: 9105/1: atags_to_fdt: don't warn about stack size
  ARM: 9103/1: Drop ARCH_NR_GPIOS definition
  ARM: 9102/1: move the install rules to arch/arm/Makefile
  ARM: 9100/1: MAINTAINERS: mark all linux-arm-kernel@infradead list as moderated
  ARM: 9099/1: crypto: rename 'mod_init' & 'mod_exit' functions to be module-specific
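
The abi_syscall commits above keep the OABI marker and the syscall index in a
single thread_info field. A small hedged sketch of what that encoding implies,
using the constants this series adds (an illustrative helper, not code from
the diff below):

	/* Constants from arch/arm/include/uapi/asm/unistd.h (see below). */
	#define __NR_OABI_SYSCALL_BASE	0x900000
	#define __NR_SYSCALL_MASK	0x0fffff

	/* EABI calls store the plain number; OABI calls store the
	 * 0x900000-based number, so both pieces can be recovered later. */
	static inline int example_syscall_index(unsigned int abi_syscall)
	{
		return abi_syscall & __NR_SYSCALL_MASK;
	}

	static inline int example_is_oabi(unsigned int abi_syscall)
	{
		return (abi_syscall & __NR_OABI_SYSCALL_BASE) != 0;
	}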
parents 43175623 6c974e79
@@ -2334,14 +2334,14 @@ N: oxnas
 ARM/PALM TREO SUPPORT
 M: Tomas Cech <sleep_walker@suse.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 W: http://hackndev.com
 F: arch/arm/mach-pxa/palmtreo.*
 ARM/PALMTX,PALMT5,PALMLD,PALMTE2,PALMTC SUPPORT
 M: Marek Vasut <marek.vasut@gmail.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 W: http://hackndev.com
 F: arch/arm/mach-pxa/include/mach/palmld.h
@@ -2355,7 +2355,7 @@ F: arch/arm/mach-pxa/palmtx.c
 ARM/PALMZ72 SUPPORT
 M: Sergey Lapin <slapin@ossfans.org>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 W: http://hackndev.com
 F: arch/arm/mach-pxa/palmz72.*
@@ -2525,7 +2525,7 @@ N: s5pv210
 ARM/SAMSUNG S5P SERIES 2D GRAPHICS ACCELERATION (G2D) SUPPORT
 M: Andrzej Hajda <a.hajda@samsung.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: linux-media@vger.kernel.org
 S: Maintained
 F: drivers/media/platform/s5p-g2d/
@@ -2542,14 +2542,14 @@ ARM/SAMSUNG S5P SERIES JPEG CODEC SUPPORT
 M: Andrzej Pietrasiewicz <andrzejtp2010@gmail.com>
 M: Jacek Anaszewski <jacek.anaszewski@gmail.com>
 M: Sylwester Nawrocki <s.nawrocki@samsung.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: linux-media@vger.kernel.org
 S: Maintained
 F: drivers/media/platform/s5p-jpeg/
 ARM/SAMSUNG S5P SERIES Multi Format Codec (MFC) SUPPORT
 M: Andrzej Hajda <a.hajda@samsung.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: linux-media@vger.kernel.org
 S: Maintained
 F: drivers/media/platform/s5p-mfc/
@@ -3568,7 +3568,7 @@ BROADCOM BCM5301X ARM ARCHITECTURE
 M: Hauke Mehrtens <hauke@hauke-m.de>
 M: Rafał Miłecki <zajec5@gmail.com>
 M: bcm-kernel-feedback-list@broadcom.com
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: arch/arm/boot/dts/bcm470*
 F: arch/arm/boot/dts/bcm5301*
@@ -3578,7 +3578,7 @@ F: arch/arm/mach-bcm/bcm_5301x.c
 BROADCOM BCM53573 ARM ARCHITECTURE
 M: Rafał Miłecki <rafal@milecki.pl>
 L: bcm-kernel-feedback-list@broadcom.com
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: arch/arm/boot/dts/bcm47189*
 F: arch/arm/boot/dts/bcm53573*
@@ -4874,7 +4874,7 @@ CPUIDLE DRIVER - ARM BIG LITTLE
 M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
 M: Daniel Lezcano <daniel.lezcano@linaro.org>
 L: linux-pm@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
 F: drivers/cpuidle/cpuidle-big_little.c
@@ -4894,14 +4894,14 @@ CPUIDLE DRIVER - ARM PSCI
 M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
 M: Sudeep Holla <sudeep.holla@arm.com>
 L: linux-pm@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Supported
 F: drivers/cpuidle/cpuidle-psci.c
 CPUIDLE DRIVER - ARM PSCI PM DOMAIN
 M: Ulf Hansson <ulf.hansson@linaro.org>
 L: linux-pm@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Supported
 F: drivers/cpuidle/cpuidle-psci.h
 F: drivers/cpuidle/cpuidle-psci-domain.c
@@ -7272,7 +7272,7 @@ F: tools/firewire/
 FIRMWARE FRAMEWORK FOR ARMV8-A
 M: Sudeep Holla <sudeep.holla@arm.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: drivers/firmware/arm_ffa/
 F: include/linux/arm_ffa.h
@@ -7451,7 +7451,7 @@ F: include/linux/platform_data/video-imxfb.h
 FREESCALE IMX DDR PMU DRIVER
 M: Frank Li <Frank.li@nxp.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/admin-guide/perf/imx-ddr.rst
 F: Documentation/devicetree/bindings/perf/fsl-imx-ddr.yaml
@@ -7543,7 +7543,7 @@ F: drivers/tty/serial/ucc_uart.c
 FREESCALE SOC DRIVERS
 M: Li Yang <leoyang.li@nxp.com>
 L: linuxppc-dev@lists.ozlabs.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/misc/fsl,dpaa2-console.yaml
 F: Documentation/devicetree/bindings/soc/fsl/
@@ -11191,7 +11191,7 @@ F: drivers/net/wireless/marvell/libertas/
 MARVELL MACCHIATOBIN SUPPORT
 M: Russell King <linux@armlinux.org.uk>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts
@@ -14272,7 +14272,7 @@ F: drivers/pci/controller/pcie-altera.c
 PCI DRIVER FOR APPLIEDMICRO XGENE
 M: Toan Le <toan@os.amperecomputing.com>
 L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/pci/xgene-pci.txt
 F: drivers/pci/controller/pci-xgene.c
@@ -14280,7 +14280,7 @@ F: drivers/pci/controller/pci-xgene.c
 PCI DRIVER FOR ARM VERSATILE PLATFORM
 M: Rob Herring <robh@kernel.org>
 L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/pci/versatile.yaml
 F: drivers/pci/controller/pci-versatile.c
@@ -14288,7 +14288,7 @@ F: drivers/pci/controller/pci-versatile.c
 PCI DRIVER FOR ARMADA 8K
 M: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
 L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/pci/pci-armada8k.txt
 F: drivers/pci/controller/dwc/pcie-armada8k.c
@@ -14306,7 +14306,7 @@ M: Mingkai Hu <mingkai.hu@nxp.com>
 M: Roy Zang <roy.zang@nxp.com>
 L: linuxppc-dev@lists.ozlabs.org
 L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: drivers/pci/controller/dwc/*layerscape*
@@ -14386,7 +14386,7 @@ F: drivers/pci/controller/pci-tegra.c
 PCI DRIVER FOR NXP LAYERSCAPE GEN4 CONTROLLER
 M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
 F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c
@@ -14421,7 +14421,7 @@ PCI DRIVER FOR TI DRA7XX/J721E
 M: Kishon Vijay Abraham I <kishon@ti.com>
 L: linux-omap@vger.kernel.org
 L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Supported
 F: Documentation/devicetree/bindings/pci/ti-pci.txt
 F: drivers/pci/controller/cadence/pci-j721e.c
@@ -14477,7 +14477,7 @@ F: drivers/pci/controller/pcie-altera-msi.c
 PCI MSI DRIVER FOR APPLIEDMICRO XGENE
 M: Toan Le <toan@os.amperecomputing.com>
 L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/pci/xgene-pci-msi.txt
 F: drivers/pci/controller/pci-xgene-msi.c
@@ -14994,7 +14994,7 @@ F: include/linux/dtpm.h
 POWER STATE COORDINATION INTERFACE (PSCI)
 M: Mark Rutland <mark.rutland@arm.com>
 M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: drivers/firmware/psci/
 F: include/linux/psci.h
@@ -15519,7 +15519,7 @@ F: arch/hexagon/
 QUALCOMM HIDMA DRIVER
 M: Sinan Kaya <okaya@kernel.org>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: linux-arm-msm@vger.kernel.org
 L: dmaengine@vger.kernel.org
 S: Supported
@@ -17233,7 +17233,7 @@ SECURE MONITOR CALL(SMC) CALLING CONVENTION (SMCCC)
 M: Mark Rutland <mark.rutland@arm.com>
 M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
 M: Sudeep Holla <sudeep.holla@arm.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: drivers/firmware/smccc/
 F: include/linux/arm-smccc.h
@@ -17350,7 +17350,7 @@ F: drivers/media/pci/solo6x10/
 SOFTWARE DELEGATED EXCEPTION INTERFACE (SDEI)
 M: James Morse <james.morse@arm.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/arm/firmware/sdei.txt
 F: drivers/firmware/arm_sdei.c
@@ -18137,7 +18137,7 @@ F: drivers/mfd/syscon.c
 SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
 M: Sudeep Holla <sudeep.holla@arm.com>
 R: Cristian Marussi <cristian.marussi@arm.com>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/firmware/arm,sc[mp]i.yaml
 F: drivers/clk/clk-sc[mp]i.c
@@ -18510,7 +18510,7 @@ TEXAS INSTRUMENTS' SYSTEM CONTROL INTERFACE (TISCI) PROTOCOL DRIVER
 M: Nishanth Menon <nm@ti.com>
 M: Tero Kristo <kristo@kernel.org>
 M: Santosh Shilimkar <ssantosh@kernel.org>
-L: linux-arm-kernel@lists.infradead.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/arm/keystone/ti,k3-sci-common.yaml
 F: Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
......
@@ -124,7 +124,6 @@ config ARM
 	select PCI_SYSCALL if PCI
 	select PERF_USE_VMALLOC
 	select RTC_LIB
-	select SET_FS
 	select SYS_SUPPORTS_APM_EMULATION
 	select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M
 	# Above selects are sorted alphabetically; please add new ones
......
@@ -308,7 +308,8 @@ $(BOOT_TARGETS): vmlinux
 	@$(kecho) ' Kernel: $(boot)/$@ is ready'
 $(INSTALL_TARGETS):
-	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $@
+	$(CONFIG_SHELL) $(srctree)/$(boot)/install.sh "$(KERNELRELEASE)" \
+	$(boot)/$(patsubst %install,%Image,$@) System.map "$(INSTALL_PATH)"
 PHONY += vdso_install
 vdso_install:
......
@@ -96,23 +96,11 @@ $(obj)/bootp/bootp: $(obj)/zImage initrd FORCE
 $(obj)/bootpImage: $(obj)/bootp/bootp FORCE
 	$(call if_changed,objcopy)
-PHONY += initrd install zinstall uinstall
+PHONY += initrd
 initrd:
 	@test "$(INITRD_PHYS)" != "" || \
 	(echo This machine does not support INITRD; exit -1)
 	@test "$(INITRD)" != "" || \
 	(echo You must specify INITRD; exit -1)
-install:
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
-	$(obj)/Image System.map "$(INSTALL_PATH)"
-zinstall:
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
-	$(obj)/zImage System.map "$(INSTALL_PATH)"
-uinstall:
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
-	$(obj)/uImage System.map "$(INSTALL_PATH)"
 subdir- := bootp compressed dts
@@ -85,6 +85,8 @@ compress-$(CONFIG_KERNEL_LZ4) = lz4
 libfdt_objs := fdt_rw.o fdt_ro.o fdt_wip.o fdt.o
 ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y)
+CFLAGS_REMOVE_atags_to_fdt.o += -Wframe-larger-than=${CONFIG_FRAME_WARN}
+CFLAGS_atags_to_fdt.o += -Wframe-larger-than=1280
 OBJS += $(libfdt_objs) atags_to_fdt.o
 endif
 ifeq ($(CONFIG_USE_OF),y)
......
@@ -52,17 +52,6 @@ static inline uint32_t __div64_32(uint64_t *n, uint32_t base)
 #else
-/*
- * gcc versions earlier than 4.0 are simply too problematic for the
- * __div64_const32() code in asm-generic/div64.h. First there is
- * gcc PR 15089 that tend to trig on more complex constructs, spurious
- * .global __udivsi3 are inserted even if none of those symbols are
- * referenced in the generated code, and those gcc versions are not able
- * to do constant propagation on long long values anyway.
- */
-#define __div64_const32_is_OK (__GNUC__ >= 4)
 static inline uint64_t __arch_xprod_64(uint64_t m, uint64_t n, bool bias)
 {
 	unsigned long long res;
......
@@ -2,10 +2,6 @@
 #ifndef _ARCH_ARM_GPIO_H
 #define _ARCH_ARM_GPIO_H
-#if CONFIG_ARCH_NR_GPIO > 0
-#define ARCH_NR_GPIOS CONFIG_ARCH_NR_GPIO
-#endif
 /* Note: this may rely upon the value of ARCH_NR_GPIOS set in mach/gpio.h */
 #include <asm-generic/gpio.h>
......
@@ -19,7 +19,6 @@ struct pt_regs {
 struct svc_pt_regs {
 	struct pt_regs regs;
 	u32 dacr;
-	u32 addr_limit;
 };
 #define to_svc_pt_regs(r) container_of(r, struct svc_pt_regs, regs)
......
@@ -22,7 +22,21 @@ extern const unsigned long sys_call_table[];
 static inline int syscall_get_nr(struct task_struct *task,
 				 struct pt_regs *regs)
 {
-	return task_thread_info(task)->syscall;
+	if (IS_ENABLED(CONFIG_AEABI) && !IS_ENABLED(CONFIG_OABI_COMPAT))
+		return task_thread_info(task)->abi_syscall;
+	return task_thread_info(task)->abi_syscall & __NR_SYSCALL_MASK;
+}
+static inline bool __in_oabi_syscall(struct task_struct *task)
+{
+	return IS_ENABLED(CONFIG_OABI_COMPAT) &&
+	       (task_thread_info(task)->abi_syscall & __NR_OABI_SYSCALL_BASE);
+}
+static inline bool in_oabi_syscall(void)
+{
+	return __in_oabi_syscall(current);
 }
 static inline void syscall_rollback(struct task_struct *task,
......
@@ -31,8 +31,6 @@ struct task_struct;
 #include <asm/types.h>
-typedef unsigned long mm_segment_t;
 struct cpu_context_save {
 	__u32 r4;
 	__u32 r5;
@@ -54,7 +52,6 @@ struct cpu_context_save {
 struct thread_info {
 	unsigned long flags;		/* low level flags */
 	int preempt_count;		/* 0 => preemptable, <0 => bug */
-	mm_segment_t addr_limit;	/* address limit */
 	struct task_struct *task;	/* main task structure */
 	__u32 cpu;			/* cpu */
 	__u32 cpu_domain;		/* cpu domain */
@@ -62,7 +59,7 @@ struct thread_info {
 	unsigned long stack_canary;
 #endif
 	struct cpu_context_save cpu_context;	/* cpu context */
-	__u32 syscall;		/* syscall number */
+	__u32 abi_syscall;	/* ABI type and syscall nr */
 	__u8 used_cp[16];	/* thread used copro */
 	unsigned long tp_value[2];	/* TLS registers */
 	union fp_state fpstate __attribute__((aligned(8)));
@@ -77,7 +74,6 @@ struct thread_info {
 	.task = &tsk, \
 	.flags = 0, \
 	.preempt_count = INIT_PREEMPT_COUNT, \
-	.addr_limit = KERNEL_DS, \
 }
 /*
......
@@ -84,12 +84,8 @@
  * if \disable is set.
  */
 	.macro uaccess_entry, tsk, tmp0, tmp1, tmp2, disable
-	ldr \tmp1, [\tsk, #TI_ADDR_LIMIT]
-	ldr \tmp2, =TASK_SIZE
-	str \tmp2, [\tsk, #TI_ADDR_LIMIT]
 DACR(	mrc p15, 0, \tmp0, c3, c0, 0)
 DACR(	str \tmp0, [sp, #SVC_DACR])
-	str \tmp1, [sp, #SVC_ADDR_LIMIT]
 	.if \disable && IS_ENABLED(CONFIG_CPU_SW_DOMAIN_PAN)
 	/* kernel=client, user=no access */
 	mov \tmp2, #DACR_UACCESS_DISABLE
@@ -106,9 +102,7 @@
 /* Restore the user access state previously saved by uaccess_entry */
 	.macro uaccess_exit, tsk, tmp0, tmp1
-	ldr \tmp1, [sp, #SVC_ADDR_LIMIT]
DACR(	ldr \tmp0, [sp, #SVC_DACR])
-	str \tmp1, [\tsk, #TI_ADDR_LIMIT]
DACR(	mcr p15, 0, \tmp0, c3, c0, 0)
 	.endm
......
@@ -52,32 +52,8 @@ static __always_inline void uaccess_restore(unsigned int flags)
 extern int __get_user_bad(void);
 extern int __put_user_bad(void);
-/*
- * Note that this is actually 0x1,0000,0000
- */
-#define KERNEL_DS 0x00000000
 #ifdef CONFIG_MMU
-#define USER_DS TASK_SIZE
-#define get_fs() (current_thread_info()->addr_limit)
-static inline void set_fs(mm_segment_t fs)
-{
-	current_thread_info()->addr_limit = fs;
-	/*
-	 * Prevent a mispredicted conditional call to set_fs from forwarding
-	 * the wrong address limit to access_ok under speculation.
-	 */
-	dsb(nsh);
-	isb();
-	modify_domain(DOMAIN_KERNEL, fs ? DOMAIN_CLIENT : DOMAIN_MANAGER);
-}
-#define uaccess_kernel() (get_fs() == KERNEL_DS)
 /*
  * We use 33-bit arithmetic here. Success returns zero, failure returns
  * addr_limit. We take advantage that addr_limit will be zero for KERNEL_DS,
@@ -89,7 +65,7 @@ static inline void set_fs(mm_segment_t fs)
 	__asm__(".syntax unified\n" \
 	"adds %1, %2, %3; sbcscc %1, %1, %0; movcc %0, #0" \
 	: "=&r" (flag), "=&r" (roksum) \
-	: "r" (addr), "Ir" (size), "0" (current_thread_info()->addr_limit) \
+	: "r" (addr), "Ir" (size), "0" (TASK_SIZE) \
 	: "cc"); \
 	flag; })
@@ -120,7 +96,7 @@ static inline void __user *__uaccess_mask_range_ptr(const void __user *ptr,
 	" subshs %1, %1, %2\n"
 	" movlo %0, #0\n"
 	: "+r" (safe_ptr), "=&r" (tmp)
-	: "r" (size), "r" (current_thread_info()->addr_limit)
+	: "r" (size), "r" (TASK_SIZE)
 	: "cc");
 	csdb();
@@ -194,7 +170,7 @@ extern int __get_user_64t_4(void *);
 #define __get_user_check(x, p) \
 ({ \
-	unsigned long __limit = current_thread_info()->addr_limit - 1; \
+	unsigned long __limit = TASK_SIZE - 1; \
 	register typeof(*(p)) __user *__p asm("r0") = (p); \
 	register __inttype(x) __r2 asm("r2"); \
 	register unsigned long __l asm("r1") = __limit; \
@@ -245,7 +221,7 @@ extern int __put_user_8(void *, unsigned long long);
 #define __put_user_check(__pu_val, __ptr, __err, __s) \
 ({ \
-	unsigned long __limit = current_thread_info()->addr_limit - 1; \
+	unsigned long __limit = TASK_SIZE - 1; \
 	register typeof(__pu_val) __r2 asm("r2") = __pu_val; \
 	register const void __user *__p asm("r0") = __ptr; \
 	register unsigned long __l asm("r1") = __limit; \
@@ -262,19 +238,8 @@ extern int __put_user_8(void *, unsigned long long);
 #else /* CONFIG_MMU */
-/*
- * uClinux has only one addr space, so has simplified address limits.
- */
-#define USER_DS KERNEL_DS
-#define uaccess_kernel() (true)
 #define __addr_ok(addr) ((void)(addr), 1)
 #define __range_ok(addr, size) ((void)(addr), 0)
-#define get_fs() (KERNEL_DS)
-static inline void set_fs(mm_segment_t fs)
-{
-}
 #define get_user(x, p) __get_user(x, p)
 #define __put_user_check __put_user_nocheck
@@ -283,9 +248,6 @@ static inline void set_fs(mm_segment_t fs)
 #define access_ok(addr, size) (__range_ok(addr, size) == 0)
-#define user_addr_max() \
-	(uaccess_kernel() ? ~0UL : get_fs())
 #ifdef CONFIG_CPU_SPECTRE
 /*
  * When mitigating Spectre variant 1, it is not worth fixing the non-
@@ -308,11 +270,11 @@ static inline void set_fs(mm_segment_t fs)
 #define __get_user(x, ptr) \
 ({ \
 	long __gu_err = 0; \
-	__get_user_err((x), (ptr), __gu_err); \
+	__get_user_err((x), (ptr), __gu_err, TUSER()); \
 	__gu_err; \
 })
-#define __get_user_err(x, ptr, err) \
+#define __get_user_err(x, ptr, err, __t) \
 do { \
 	unsigned long __gu_addr = (unsigned long)(ptr); \
 	unsigned long __gu_val; \
@@ -321,18 +283,19 @@ do { \
 	might_fault(); \
 	__ua_flags = uaccess_save_and_enable(); \
 	switch (sizeof(*(ptr))) { \
-	case 1: __get_user_asm_byte(__gu_val, __gu_addr, err); break; \
-	case 2: __get_user_asm_half(__gu_val, __gu_addr, err); break; \
-	case 4: __get_user_asm_word(__gu_val, __gu_addr, err); break; \
+	case 1: __get_user_asm_byte(__gu_val, __gu_addr, err, __t); break; \
+	case 2: __get_user_asm_half(__gu_val, __gu_addr, err, __t); break; \
+	case 4: __get_user_asm_word(__gu_val, __gu_addr, err, __t); break; \
 	default: (__gu_val) = __get_user_bad(); \
 	} \
 	uaccess_restore(__ua_flags); \
 	(x) = (__typeof__(*(ptr)))__gu_val; \
 } while (0)
+#endif
 #define __get_user_asm(x, addr, err, instr) \
 	__asm__ __volatile__( \
-	"1: " TUSER(instr) " %1, [%2], #0\n" \
+	"1: " instr " %1, [%2], #0\n" \
 	"2:\n" \
 	" .pushsection .text.fixup,\"ax\"\n" \
 	" .align 2\n" \
@@ -348,40 +311,38 @@ do { \
 	: "r" (addr), "i" (-EFAULT) \
 	: "cc")
-#define __get_user_asm_byte(x, addr, err) \
-	__get_user_asm(x, addr, err, ldrb)
+#define __get_user_asm_byte(x, addr, err, __t) \
+	__get_user_asm(x, addr, err, "ldrb" __t)
 #if __LINUX_ARM_ARCH__ >= 6
-#define __get_user_asm_half(x, addr, err) \
-	__get_user_asm(x, addr, err, ldrh)
+#define __get_user_asm_half(x, addr, err, __t) \
+	__get_user_asm(x, addr, err, "ldrh" __t)
 #else
 #ifndef __ARMEB__
-#define __get_user_asm_half(x, __gu_addr, err) \
+#define __get_user_asm_half(x, __gu_addr, err, __t) \
 ({ \
 	unsigned long __b1, __b2; \
-	__get_user_asm_byte(__b1, __gu_addr, err); \
-	__get_user_asm_byte(__b2, __gu_addr + 1, err); \
+	__get_user_asm_byte(__b1, __gu_addr, err, __t); \
+	__get_user_asm_byte(__b2, __gu_addr + 1, err, __t); \
 	(x) = __b1 | (__b2 << 8); \
 })
 #else
-#define __get_user_asm_half(x, __gu_addr, err) \
+#define __get_user_asm_half(x, __gu_addr, err, __t) \
 ({ \
 	unsigned long __b1, __b2; \
-	__get_user_asm_byte(__b1, __gu_addr, err); \
-	__get_user_asm_byte(__b2, __gu_addr + 1, err); \
+	__get_user_asm_byte(__b1, __gu_addr, err, __t); \
+	__get_user_asm_byte(__b2, __gu_addr + 1, err, __t); \
 	(x) = (__b1 << 8) | __b2; \
 })
 #endif
 #endif /* __LINUX_ARM_ARCH__ >= 6 */
-#define __get_user_asm_word(x, addr, err) \
-	__get_user_asm(x, addr, err, ldr)
-#endif
+#define __get_user_asm_word(x, addr, err, __t) \
+	__get_user_asm(x, addr, err, "ldr" __t)
 #define __put_user_switch(x, ptr, __err, __fn) \
 do { \
@@ -425,7 +386,7 @@ do { \
 #define __put_user_nocheck(x, __pu_ptr, __err, __size) \
 do { \
 	unsigned long __pu_addr = (unsigned long)__pu_ptr; \
-	__put_user_nocheck_##__size(x, __pu_addr, __err); \
+	__put_user_nocheck_##__size(x, __pu_addr, __err, TUSER());\
 } while (0)
 #define __put_user_nocheck_1 __put_user_asm_byte
@@ -433,9 +394,11 @@ do { \
 #define __put_user_nocheck_4 __put_user_asm_word
 #define __put_user_nocheck_8 __put_user_asm_dword
+#endif /* !CONFIG_CPU_SPECTRE */
 #define __put_user_asm(x, __pu_addr, err, instr) \
 	__asm__ __volatile__( \
-	"1: " TUSER(instr) " %1, [%2], #0\n" \
+	"1: " instr " %1, [%2], #0\n" \
 	"2:\n" \
 	" .pushsection .text.fixup,\"ax\"\n" \
 	" .align 2\n" \
@@ -450,36 +413,36 @@ do { \
 	: "r" (x), "r" (__pu_addr), "i" (-EFAULT) \
 	: "cc")
-#define __put_user_asm_byte(x, __pu_addr, err) \
-	__put_user_asm(x, __pu_addr, err, strb)
+#define __put_user_asm_byte(x, __pu_addr, err, __t) \
+	__put_user_asm(x, __pu_addr, err, "strb" __t)
 #if __LINUX_ARM_ARCH__ >= 6
-#define __put_user_asm_half(x, __pu_addr, err) \
-	__put_user_asm(x, __pu_addr, err, strh)
+#define __put_user_asm_half(x, __pu_addr, err, __t) \
+	__put_user_asm(x, __pu_addr, err, "strh" __t)
 #else
 #ifndef __ARMEB__
-#define __put_user_asm_half(x, __pu_addr, err) \
+#define __put_user_asm_half(x, __pu_addr, err, __t) \
 ({ \
 	unsigned long __temp = (__force unsigned long)(x); \
-	__put_user_asm_byte(__temp, __pu_addr, err); \
-	__put_user_asm_byte(__temp >> 8, __pu_addr + 1, err); \
+	__put_user_asm_byte(__temp, __pu_addr, err, __t); \
+	__put_user_asm_byte(__temp >> 8, __pu_addr + 1, err, __t);\
 })
 #else
-#define __put_user_asm_half(x, __pu_addr, err) \
+#define __put_user_asm_half(x, __pu_addr, err, __t) \
 ({ \
 	unsigned long __temp = (__force unsigned long)(x); \
-	__put_user_asm_byte(__temp >> 8, __pu_addr, err); \
-	__put_user_asm_byte(__temp, __pu_addr + 1, err); \
+	__put_user_asm_byte(__temp >> 8, __pu_addr, err, __t); \
+	__put_user_asm_byte(__temp, __pu_addr + 1, err, __t); \
})
 #endif
 #endif /* __LINUX_ARM_ARCH__ >= 6 */
-#define __put_user_asm_word(x, __pu_addr, err) \
-	__put_user_asm(x, __pu_addr, err, str)
+#define __put_user_asm_word(x, __pu_addr, err, __t) \
+	__put_user_asm(x, __pu_addr, err, "str" __t)
 #ifndef __ARMEB__
 #define __reg_oper0 "%R2"
@@ -489,12 +452,12 @@ do { \
 #define __reg_oper1 "%R2"
 #endif
-#define __put_user_asm_dword(x, __pu_addr, err) \
+#define __put_user_asm_dword(x, __pu_addr, err, __t) \
 	__asm__ __volatile__( \
- ARM( "1: " TUSER(str) " " __reg_oper1 ", [%1], #4\n" ) \
- ARM( "2: " TUSER(str) " " __reg_oper0 ", [%1]\n" ) \
- THUMB( "1: " TUSER(str) " " __reg_oper1 ", [%1]\n" ) \
- THUMB( "2: " TUSER(str) " " __reg_oper0 ", [%1, #4]\n" ) \
+ ARM( "1: str" __t " " __reg_oper1 ", [%1], #4\n" ) \
+ ARM( "2: str" __t " " __reg_oper0 ", [%1]\n" ) \
+ THUMB( "1: str" __t " " __reg_oper1 ", [%1]\n" ) \
+ THUMB( "2: str" __t " " __reg_oper0 ", [%1, #4]\n" ) \
 	"3:\n" \
 	" .pushsection .text.fixup,\"ax\"\n" \
 	" .align 2\n" \
@@ -510,7 +473,49 @@ do { \
 	: "r" (x), "i" (-EFAULT) \
 	: "cc")
-#endif /* !CONFIG_CPU_SPECTRE */
+#define HAVE_GET_KERNEL_NOFAULT
+#define __get_kernel_nofault(dst, src, type, err_label) \
+do { \
+	const type *__pk_ptr = (src); \
+	unsigned long __src = (unsigned long)(__pk_ptr); \
+	type __val; \
+	int __err = 0; \
+	switch (sizeof(type)) { \
+	case 1: __get_user_asm_byte(__val, __src, __err, ""); break; \
+	case 2: __get_user_asm_half(__val, __src, __err, ""); break; \
+	case 4: __get_user_asm_word(__val, __src, __err, ""); break; \
+	case 8: { \
+		u32 *__v32 = (u32*)&__val; \
+		__get_user_asm_word(__v32[0], __src, __err, ""); \
+		if (__err) \
+			break; \
+		__get_user_asm_word(__v32[1], __src+4, __err, ""); \
+		break; \
+	} \
+	default: __err = __get_user_bad(); break; \
+	} \
+	*(type *)(dst) = __val; \
+	if (__err) \
+		goto err_label; \
+} while (0)
+#define __put_kernel_nofault(dst, src, type, err_label) \
+do { \
+	const type *__pk_ptr = (dst); \
+	unsigned long __dst = (unsigned long)__pk_ptr; \
+	int __err = 0; \
+	type __val = *(type *)src; \
+	switch (sizeof(type)) { \
+	case 1: __put_user_asm_byte(__val, __dst, __err, ""); break; \
+	case 2: __put_user_asm_half(__val, __dst, __err, ""); break; \
+	case 4: __put_user_asm_word(__val, __dst, __err, ""); break; \
+	case 8: __put_user_asm_dword(__val, __dst, __err, ""); break; \
+	default: __err = __put_user_bad(); break; \
+	} \
+	if (__err) \
+		goto err_label; \
+} while (0)
 #ifdef CONFIG_MMU
 extern unsigned long __must_check
......
@@ -24,10 +24,6 @@ __asm__(".syntax unified");
 #ifdef CONFIG_THUMB2_KERNEL
-#if __GNUC__ < 4
-#error Thumb-2 kernel requires gcc >= 4
-#endif
 /* The CPSR bit describing the instruction set (Thumb) */
 #define PSR_ISETSTATE PSR_T_BIT
......
@@ -15,6 +15,7 @@
 #define _UAPI__ASM_ARM_UNISTD_H
 #define __NR_OABI_SYSCALL_BASE 0x900000
+#define __NR_SYSCALL_MASK 0x0fffff
 #if defined(__thumb__) || defined(__ARM_EABI__)
 #define __NR_SYSCALL_BASE 0
......
@@ -43,11 +43,11 @@ int main(void)
 	BLANK();
 	DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
 	DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
-	DEFINE(TI_ADDR_LIMIT, offsetof(struct thread_info, addr_limit));
 	DEFINE(TI_TASK, offsetof(struct thread_info, task));
 	DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
 	DEFINE(TI_CPU_DOMAIN, offsetof(struct thread_info, cpu_domain));
 	DEFINE(TI_CPU_SAVE, offsetof(struct thread_info, cpu_context));
+	DEFINE(TI_ABI_SYSCALL, offsetof(struct thread_info, abi_syscall));
 	DEFINE(TI_USED_CP, offsetof(struct thread_info, used_cp));
 	DEFINE(TI_TP_VALUE, offsetof(struct thread_info, tp_value));
 	DEFINE(TI_FPSTATE, offsetof(struct thread_info, fpstate));
@@ -88,7 +88,6 @@ int main(void)
 	DEFINE(S_OLD_R0, offsetof(struct pt_regs, ARM_ORIG_r0));
 	DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
 	DEFINE(SVC_DACR, offsetof(struct svc_pt_regs, dacr));
-	DEFINE(SVC_ADDR_LIMIT, offsetof(struct svc_pt_regs, addr_limit));
 	DEFINE(SVC_REGS_SIZE, sizeof(struct svc_pt_regs));
 	BLANK();
 	DEFINE(SIGFRAME_RC3_OFFSET, offsetof(struct sigframe, retcode[3]));
......
@@ -49,10 +49,6 @@ __ret_fast_syscall:
 UNWIND(.fnstart )
 UNWIND(.cantunwind )
 	disable_irq_notrace			@ disable interrupts
-	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	ldr	r1, =TASK_SIZE
-	cmp	r2, r1
-	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
 	movs	r1, r1, lsl #16
 	bne	fast_work_pending
@@ -87,10 +83,6 @@ __ret_fast_syscall:
 	bl	do_rseq_syscall
 #endif
 	disable_irq_notrace			@ disable interrupts
-	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	ldr	r1, =TASK_SIZE
-	cmp	r2, r1
-	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
 	movs	r1, r1, lsl #16
 	beq	no_work_pending
@@ -129,10 +121,6 @@ ret_slow_syscall:
 #endif
 	disable_irq_notrace			@ disable interrupts
 ENTRY(ret_to_user_from_irq)
-	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	ldr	r1, =TASK_SIZE
-	cmp	r2, r1
-	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]
 	movs	r1, r1, lsl #16
 	bne	slow_work_pending
@@ -226,6 +214,7 @@ ENTRY(vector_swi)
 	/* saved_psr and saved_pc are now dead */
 	uaccess_disable tbl
+	get_thread_info tsk
 	adr	tbl, sys_call_table		@ load syscall table pointer
@@ -237,13 +226,17 @@ ENTRY(vector_swi)
 	 * get the old ABI syscall table address.
 	 */
 	bics	r10, r10, #0xff000000
+	strne	r10, [tsk, #TI_ABI_SYSCALL]
+	streq	scno, [tsk, #TI_ABI_SYSCALL]
 	eorne	scno, r10, #__NR_OABI_SYSCALL_BASE
 	ldrne	tbl, =sys_oabi_call_table
 #elif !defined(CONFIG_AEABI)
 	bic	scno, scno, #0xff000000		@ mask off SWI op-code
+	str	scno, [tsk, #TI_ABI_SYSCALL]
 	eor	scno, scno, #__NR_SYSCALL_BASE	@ check OS number
+#else
+	str	scno, [tsk, #TI_ABI_SYSCALL]
 #endif
-	get_thread_info tsk
 	/*
	 * Reload the registers that may have been corrupted on entry to
	 * the syscall assembly (by tracing or context tracking.)
@@ -288,7 +281,6 @@ ENDPROC(vector_swi)
 * context switches, and waiting for our parent to respond.
 */
 __sys_trace:
-	mov	r1, scno
 	add	r0, sp, #S_OFF
 	bl	syscall_trace_enter
 	mov	scno, r0
......
@@ -106,7 +106,7 @@ void __show_regs(struct pt_regs *regs)
 	unsigned long flags;
 	char buf[64];
 #ifndef CONFIG_CPU_V7M
-	unsigned int domain, fs;
+	unsigned int domain;
 #ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	/*
	 * Get the domain register for the parent context. In user
@@ -115,14 +115,11 @@ void __show_regs(struct pt_regs *regs)
	 */
 	if (user_mode(regs)) {
 		domain = DACR_UACCESS_ENABLE;
-		fs = get_fs();
 	} else {
 		domain = to_svc_pt_regs(regs)->dacr;
-		fs = to_svc_pt_regs(regs)->addr_limit;
 	}
 #else
 	domain = get_domain();
-	fs = get_fs();
 #endif
 #endif
@@ -158,8 +155,6 @@ void __show_regs(struct pt_regs *regs)
 	if ((domain & domain_mask(DOMAIN_USER)) ==
 	    domain_val(DOMAIN_USER, DOMAIN_NOACCESS))
 		segment = "none";
-	else if (fs == KERNEL_DS)
-		segment = "kernel";
 	else
 		segment = "user";
......
@@ -25,6 +25,7 @@
 #include <linux/tracehook.h>
 #include <linux/unistd.h>
+#include <asm/syscall.h>
 #include <asm/traps.h>
 #define CREATE_TRACE_POINTS
@@ -785,7 +786,8 @@ long arch_ptrace(struct task_struct *child, long request,
 			break;
 		case PTRACE_SET_SYSCALL:
-			task_thread_info(child)->syscall = data;
+			task_thread_info(child)->abi_syscall = data &
+							__NR_SYSCALL_MASK;
 			ret = 0;
 			break;
@@ -844,14 +846,14 @@ static void tracehook_report_syscall(struct pt_regs *regs,
 	if (dir == PTRACE_SYSCALL_EXIT)
 		tracehook_report_syscall_exit(regs, 0);
 	else if (tracehook_report_syscall_entry(regs))
-		current_thread_info()->syscall = -1;
+		current_thread_info()->abi_syscall = -1;
 	regs->ARM_ip = ip;
 }
-asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
+asmlinkage int syscall_trace_enter(struct pt_regs *regs)
 {
-	current_thread_info()->syscall = scno;
+	int scno;
 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
@@ -862,11 +864,11 @@ asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
 	return -1;
 #else
 	/* XXX: remove this once OABI gets fixed */
-	secure_computing_strict(current_thread_info()->syscall);
+	secure_computing_strict(syscall_get_nr(current, regs));
 #endif
 	/* Tracer or seccomp may have changed syscall. */
-	scno = current_thread_info()->syscall;
+	scno = syscall_get_nr(current, regs);
 	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
 		trace_sys_enter(regs, scno);
......
@@ -669,14 +669,6 @@ struct page *get_signal_page(void)
 	return page;
 }
-/* Defer to generic check */
-asmlinkage void addr_limit_check_failed(void)
-{
-#ifdef CONFIG_MMU
-	addr_limit_user_check();
-#endif
-}
 #ifdef CONFIG_DEBUG_RSEQ
 asmlinkage void do_rseq_syscall(struct pt_regs *regs)
 {
......
@@ -80,9 +80,12 @@
 #include <linux/socket.h>
 #include <linux/net.h>
 #include <linux/ipc.h>
+#include <linux/ipc_namespace.h>
 #include <linux/uaccess.h>
 #include <linux/slab.h>
+#include <asm/syscall.h>
 struct oldabi_stat64 {
 	unsigned long long st_dev;
 	unsigned int __pad1;
@@ -191,60 +194,87 @@ struct oabi_flock64 {
 	pid_t l_pid;
 } __attribute__ ((packed,aligned(4)));
-static long do_locks(unsigned int fd, unsigned int cmd,
-		     unsigned long arg)
+static int get_oabi_flock(struct flock64 *kernel, struct oabi_flock64 __user *arg)
 {
-	struct flock64 kernel;
 	struct oabi_flock64 user;
-	mm_segment_t fs;
-	long ret;
 	if (copy_from_user(&user, (struct oabi_flock64 __user *)arg,
 			   sizeof(user)))
 		return -EFAULT;
-	kernel.l_type = user.l_type;
-	kernel.l_whence = user.l_whence;
-	kernel.l_start = user.l_start;
-	kernel.l_len = user.l_len;
-	kernel.l_pid = user.l_pid;
-	fs = get_fs();
-	set_fs(KERNEL_DS);
-	ret = sys_fcntl64(fd, cmd, (unsigned long)&kernel);
-	set_fs(fs);
+	kernel->l_type = user.l_type;
+	kernel->l_whence = user.l_whence;
+	kernel->l_start = user.l_start;
+	kernel->l_len = user.l_len;
+	kernel->l_pid = user.l_pid;
+	return 0;
+}
-	if (!ret && (cmd == F_GETLK64 || cmd == F_OFD_GETLK)) {
-		user.l_type = kernel.l_type;
-		user.l_whence = kernel.l_whence;
-		user.l_start = kernel.l_start;
-		user.l_len = kernel.l_len;
-		user.l_pid = kernel.l_pid;
-		if (copy_to_user((struct oabi_flock64 __user *)arg,
-				 &user, sizeof(user)))
-			ret = -EFAULT;
-	}
-	return ret;
+static int put_oabi_flock(struct flock64 *kernel, struct oabi_flock64 __user *arg)
+{
+	struct oabi_flock64 user;
+	user.l_type = kernel->l_type;
+	user.l_whence = kernel->l_whence;
+	user.l_start = kernel->l_start;
+	user.l_len = kernel->l_len;
+	user.l_pid = kernel->l_pid;
+	if (copy_to_user((struct oabi_flock64 __user *)arg,
+			 &user, sizeof(user)))
+		return -EFAULT;
+	return 0;
 }
 asmlinkage long sys_oabi_fcntl64(unsigned int fd, unsigned int cmd,
 				 unsigned long arg)
 {
+	void __user *argp = (void __user *)arg;
+	struct fd f = fdget_raw(fd);
+	struct flock64 flock;
+	long err = -EBADF;
+	if (!f.file)
+		goto out;
 	switch (cmd) {
-	case F_OFD_GETLK:
-	case F_OFD_SETLK:
-	case F_OFD_SETLKW:
 	case F_GETLK64:
+	case F_OFD_GETLK:
+		err = security_file_fcntl(f.file, cmd, arg);
+		if (err)
+			break;
+		err = get_oabi_flock(&flock, argp);
+		if (err)
+			break;
+		err = fcntl_getlk64(f.file, cmd, &flock);
+		if (!err)
+			err = put_oabi_flock(&flock, argp);
+		break;
 	case F_SETLK64:
 	case F_SETLKW64:
-		return do_locks(fd, cmd, arg);
+	case F_OFD_SETLK:
+	case F_OFD_SETLKW:
+		err = security_file_fcntl(f.file, cmd, arg);
+		if (err)
+			break;
+		err = get_oabi_flock(&flock, argp);
+		if (err)
+			break;
+		err = fcntl_setlk64(fd, f.file, cmd, &flock);
+		break;
 	default:
-		return sys_fcntl64(fd, cmd, arg);
+		err = sys_fcntl64(fd, cmd, arg);
+		break;
 	}
+	fdput(f);
+out:
+	return err;
 }
 struct oabi_epoll_event {
-	__u32 events;
+	__poll_t events;
 	__u64 data;
 } __attribute__ ((packed,aligned(4)));
@@ -264,55 +294,34 @@ asmlinkage long sys_oabi_epoll_ctl(int epfd, int op, int fd,
 	return do_epoll_ctl(epfd, op, fd, &kernel, false);
 }
-asmlinkage long sys_oabi_epoll_wait(int epfd,
-				    struct oabi_epoll_event __user *events,
-				    int maxevents, int timeout)
-{
-	struct epoll_event *kbuf;
-	struct oabi_epoll_event e;
-	mm_segment_t fs;
-	long ret, err, i;
-	if (maxevents <= 0 ||
-			maxevents > (INT_MAX/sizeof(*kbuf)) ||
-			maxevents > (INT_MAX/sizeof(*events)))
-		return -EINVAL;
-	if (!access_ok(events, sizeof(*events) * maxevents))
-		return -EFAULT;
-	kbuf = kmalloc_array(maxevents, sizeof(*kbuf), GFP_KERNEL);
-	if (!kbuf)
-		return -ENOMEM;
-	fs = get_fs();
-	set_fs(KERNEL_DS);
-	ret = sys_epoll_wait(epfd, kbuf, maxevents, timeout);
-	set_fs(fs);
-	err = 0;
-	for (i = 0; i < ret; i++) {
-		e.events = kbuf[i].events;
-		e.data = kbuf[i].data;
-		err = __copy_to_user(events, &e, sizeof(e));
-		if (err)
-			break;
-		events++;
-	}
-	kfree(kbuf);
-	return err ? -EFAULT : ret;
-}
 #else
 asmlinkage long sys_oabi_epoll_ctl(int epfd, int op, int fd,
 				   struct oabi_epoll_event __user *event)
 {
 	return -EINVAL;
 }
+#endif
-asmlinkage long sys_oabi_epoll_wait(int epfd,
-				    struct oabi_epoll_event __user *events,
-				    int maxevents, int timeout)
+struct epoll_event __user *
+epoll_put_uevent(__poll_t revents, __u64 data,
+		 struct epoll_event __user *uevent)
 {
-	return -EINVAL;
+	if (in_oabi_syscall()) {
+		struct oabi_epoll_event __user *oevent = (void __user *)uevent;
+		if (__put_user(revents, &oevent->events) ||
+		    __put_user(data, &oevent->data))
+			return NULL;
+		return (void __user *)(oevent+1);
+	}
+	if (__put_user(revents, &uevent->events) ||
+	    __put_user(data, &uevent->data))
+		return NULL;
+	return uevent+1;
 }
-#endif
 struct oabi_sembuf {
 	unsigned short sem_num;
@@ -321,46 +330,52 @@ struct oabi_sembuf {
 	unsigned short __pad;
 };
+#define sc_semopm sem_ctls[2]
+#ifdef CONFIG_SYSVIPC
 asmlinkage long sys_oabi_semtimedop(int semid,
 				    struct oabi_sembuf __user *tsops,
 				    unsigned nsops,
 				    const struct old_timespec32 __user *timeout)
 {
+	struct ipc_namespace *ns;
 	struct sembuf *sops;
-	struct old_timespec32 local_timeout;
 	long err;
 	int i;
+	ns = current->nsproxy->ipc_ns;
+	if (nsops > ns->sc_semopm)
+		return -E2BIG;
 	if (nsops < 1 || nsops > SEMOPM)
 		return -EINVAL;
-	if (!access_ok(tsops, sizeof(*tsops) * nsops))
-		return -EFAULT;
-	sops = kmalloc_array(nsops, sizeof(*sops), GFP_KERNEL);
+	sops = kvmalloc_array(nsops, sizeof(*sops), GFP_KERNEL);
 	if (!sops)
 		return -ENOMEM;
 	err = 0;
 	for (i = 0; i < nsops; i++) {
 		struct oabi_sembuf osb;
-		err |= __copy_from_user(&osb, tsops, sizeof(osb));
+		err |= copy_from_user(&osb, tsops, sizeof(osb));
 		sops[i].sem_num = osb.sem_num;
 		sops[i].sem_op = osb.sem_op;
 		sops[i].sem_flg = osb.sem_flg;
 		tsops++;
 	}
-	if (timeout) {
-		/* copy this as well before changing domain protection */
-		err |= copy_from_user(&local_timeout, timeout, sizeof(*timeout));
-		timeout = &local_timeout;
-	}
 	if (err) {
 		err = -EFAULT;
-	} else {
-		mm_segment_t fs = get_fs();
-		set_fs(KERNEL_DS);
-		err = sys_semtimedop_time32(semid, sops, nsops, timeout);
-		set_fs(fs);
+		goto out;
 	}
-	kfree(sops);
+	if (timeout) {
+		struct timespec64 ts;
+		err = get_old_timespec32(&ts, timeout);
+		if (err)
+			goto out;
+		err = __do_semtimedop(semid, sops, nsops, &ts, ns);
+		goto out;
+	}
+	err = __do_semtimedop(semid, sops, nsops, NULL, ns);
+out:
+	kvfree(sops);
 	return err;
 }
@@ -387,6 +402,27 @@ asmlinkage int sys_oabi_ipc(uint call, int first, int second, int third,
 		return sys_ipc(call, first, second, third, ptr, fifth);
 	}
 }
+#else
+asmlinkage long sys_oabi_semtimedop(int semid,
+				    struct oabi_sembuf __user *tsops,
+				    unsigned nsops,
+				    const struct old_timespec32 __user *timeout)
+{
+	return -ENOSYS;
+}
+asmlinkage long sys_oabi_semop(int semid, struct oabi_sembuf __user *tsops,
+			       unsigned nsops)
+{
+	return -ENOSYS;
+}
+asmlinkage int sys_oabi_ipc(uint call, int first, int second, int third,
+			    void __user *ptr, long fifth)
+{
+	return -ENOSYS;
+}
+#endif
 asmlinkage long sys_oabi_bind(int fd, struct sockaddr __user *addr, int addrlen)
 {
......
...@@ -122,17 +122,8 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom, ...@@ -122,17 +122,8 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
unsigned long top) unsigned long top)
{ {
unsigned long first; unsigned long first;
mm_segment_t fs;
int i; int i;
/*
* We need to switch to kernel mode so that we can use __get_user
* to safely read from kernel space. Note that we now dump the
* code first, just in case the backtrace kills us.
*/
fs = get_fs();
set_fs(KERNEL_DS);
printk("%s%s(0x%08lx to 0x%08lx)\n", lvl, str, bottom, top); printk("%s%s(0x%08lx to 0x%08lx)\n", lvl, str, bottom, top);
for (first = bottom & ~31; first < top; first += 32) { for (first = bottom & ~31; first < top; first += 32) {
...@@ -145,7 +136,7 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom, ...@@ -145,7 +136,7 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
for (p = first, i = 0; i < 8 && p < top; i++, p += 4) { for (p = first, i = 0; i < 8 && p < top; i++, p += 4) {
if (p >= bottom && p < top) { if (p >= bottom && p < top) {
unsigned long val; unsigned long val;
if (__get_user(val, (unsigned long *)p) == 0) if (get_kernel_nofault(val, (unsigned long *)p))
sprintf(str + i * 9, " %08lx", val); sprintf(str + i * 9, " %08lx", val);
else else
sprintf(str + i * 9, " ????????"); sprintf(str + i * 9, " ????????");
...@@ -153,11 +144,9 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom, ...@@ -153,11 +144,9 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
} }
printk("%s%04lx:%s\n", lvl, first & 0xffff, str); printk("%s%04lx:%s\n", lvl, first & 0xffff, str);
} }
set_fs(fs);
} }
static void __dump_instr(const char *lvl, struct pt_regs *regs) static void dump_instr(const char *lvl, struct pt_regs *regs)
{ {
unsigned long addr = instruction_pointer(regs); unsigned long addr = instruction_pointer(regs);
const int thumb = thumb_mode(regs); const int thumb = thumb_mode(regs);
...@@ -173,10 +162,20 @@ static void __dump_instr(const char *lvl, struct pt_regs *regs) ...@@ -173,10 +162,20 @@ static void __dump_instr(const char *lvl, struct pt_regs *regs)
for (i = -4; i < 1 + !!thumb; i++) { for (i = -4; i < 1 + !!thumb; i++) {
unsigned int val, bad; unsigned int val, bad;
if (thumb) if (!user_mode(regs)) {
bad = get_user(val, &((u16 *)addr)[i]); if (thumb) {
else u16 val16;
bad = get_user(val, &((u32 *)addr)[i]); bad = get_kernel_nofault(val16, &((u16 *)addr)[i]);
val = val16;
} else {
bad = get_kernel_nofault(val, &((u32 *)addr)[i]);
}
} else {
if (thumb)
bad = get_user(val, &((u16 *)addr)[i]);
else
bad = get_user(val, &((u32 *)addr)[i]);
}
if (!bad) if (!bad)
p += sprintf(p, i == 0 ? "(%0*x) " : "%0*x ", p += sprintf(p, i == 0 ? "(%0*x) " : "%0*x ",
...@@ -189,20 +188,6 @@ static void __dump_instr(const char *lvl, struct pt_regs *regs) ...@@ -189,20 +188,6 @@ static void __dump_instr(const char *lvl, struct pt_regs *regs)
printk("%sCode: %s\n", lvl, str); printk("%sCode: %s\n", lvl, str);
} }
static void dump_instr(const char *lvl, struct pt_regs *regs)
{
mm_segment_t fs;
if (!user_mode(regs)) {
fs = get_fs();
set_fs(KERNEL_DS);
__dump_instr(lvl, regs);
set_fs(fs);
} else {
__dump_instr(lvl, regs);
}
}
#ifdef CONFIG_ARM_UNWIND #ifdef CONFIG_ARM_UNWIND
static inline void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk, static inline void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl) const char *loglvl)
......
...@@ -109,8 +109,7 @@ ...@@ -109,8 +109,7 @@
ENTRY(arm_copy_from_user) ENTRY(arm_copy_from_user)
#ifdef CONFIG_CPU_SPECTRE #ifdef CONFIG_CPU_SPECTRE
get_thread_info r3 ldr r3, =TASK_SIZE
ldr r3, [r3, #TI_ADDR_LIMIT]
uaccess_mask_range_ptr r1, r2, r3, ip uaccess_mask_range_ptr r1, r2, r3, ip
#endif #endif
......
...@@ -109,8 +109,7 @@ ...@@ -109,8 +109,7 @@
ENTRY(__copy_to_user_std) ENTRY(__copy_to_user_std)
WEAK(arm_copy_to_user) WEAK(arm_copy_to_user)
#ifdef CONFIG_CPU_SPECTRE #ifdef CONFIG_CPU_SPECTRE
get_thread_info r3 ldr r3, =TASK_SIZE
ldr r3, [r3, #TI_ADDR_LIMIT]
uaccess_mask_range_ptr r0, r2, r3, ip uaccess_mask_range_ptr r0, r2, r3, ip
#endif #endif
......
...@@ -266,7 +266,7 @@ ...@@ -266,7 +266,7 @@
249 common lookup_dcookie sys_lookup_dcookie 249 common lookup_dcookie sys_lookup_dcookie
250 common epoll_create sys_epoll_create 250 common epoll_create sys_epoll_create
251 common epoll_ctl sys_epoll_ctl sys_oabi_epoll_ctl 251 common epoll_ctl sys_epoll_ctl sys_oabi_epoll_ctl
252 common epoll_wait sys_epoll_wait sys_oabi_epoll_wait 252 common epoll_wait sys_epoll_wait
253 common remap_file_pages sys_remap_file_pages 253 common remap_file_pages sys_remap_file_pages
# 254 for set_thread_area # 254 for set_thread_area
# 255 for get_thread_area # 255 for get_thread_area
......
...@@ -1686,8 +1686,8 @@ static int ep_send_events(struct eventpoll *ep, ...@@ -1686,8 +1686,8 @@ static int ep_send_events(struct eventpoll *ep,
if (!revents) if (!revents)
continue; continue;
if (__put_user(revents, &events->events) || events = epoll_put_uevent(revents, epi->event.data, events);
__put_user(epi->event.data, &events->data)) { if (!events) {
list_add(&epi->rdllink, &txlist); list_add(&epi->rdllink, &txlist);
ep_pm_stay_awake(epi); ep_pm_stay_awake(epi);
if (!res) if (!res)
...@@ -1695,7 +1695,6 @@ static int ep_send_events(struct eventpoll *ep, ...@@ -1695,7 +1695,6 @@ static int ep_send_events(struct eventpoll *ep,
break; break;
} }
res++; res++;
events++;
if (epi->event.events & EPOLLONESHOT) if (epi->event.events & EPOLLONESHOT)
epi->event.events &= EP_PRIVATE_BITS; epi->event.events &= EP_PRIVATE_BITS;
else if (!(epi->event.events & EPOLLET)) { else if (!(epi->event.events & EPOLLET)) {
......
...@@ -57,17 +57,11 @@ ...@@ -57,17 +57,11 @@
/* /*
* If the divisor happens to be constant, we determine the appropriate * If the divisor happens to be constant, we determine the appropriate
* inverse at compile time to turn the division into a few inline * inverse at compile time to turn the division into a few inline
* multiplications which ought to be much faster. And yet only if compiling * multiplications which ought to be much faster.
* with a sufficiently recent gcc version to perform proper 64-bit constant
* propagation.
* *
* (It is unfortunate that gcc doesn't perform all this internally.) * (It is unfortunate that gcc doesn't perform all this internally.)
*/ */
#ifndef __div64_const32_is_OK
#define __div64_const32_is_OK (__GNUC__ >= 4)
#endif
#define __div64_const32(n, ___b) \ #define __div64_const32(n, ___b) \
({ \ ({ \
/* \ /* \
...@@ -230,8 +224,7 @@ extern uint32_t __div64_32(uint64_t *dividend, uint32_t divisor); ...@@ -230,8 +224,7 @@ extern uint32_t __div64_32(uint64_t *dividend, uint32_t divisor);
is_power_of_2(__base)) { \ is_power_of_2(__base)) { \
__rem = (n) & (__base - 1); \ __rem = (n) & (__base - 1); \
(n) >>= ilog2(__base); \ (n) >>= ilog2(__base); \
} else if (__div64_const32_is_OK && \ } else if (__builtin_constant_p(__base) && \
__builtin_constant_p(__base) && \
__base != 0) { \ __base != 0) { \
uint32_t __res_lo, __n_lo = (n); \ uint32_t __res_lo, __n_lo = (n); \
(n) = __div64_const32(n, __base); \ (n) = __div64_const32(n, __base); \
...@@ -241,8 +234,9 @@ extern uint32_t __div64_32(uint64_t *dividend, uint32_t divisor); ...@@ -241,8 +234,9 @@ extern uint32_t __div64_32(uint64_t *dividend, uint32_t divisor);
} else if (likely(((n) >> 32) == 0)) { \ } else if (likely(((n) >> 32) == 0)) { \
__rem = (uint32_t)(n) % __base; \ __rem = (uint32_t)(n) % __base; \
(n) = (uint32_t)(n) / __base; \ (n) = (uint32_t)(n) / __base; \
} else \ } else { \
__rem = __div64_32(&(n), __base); \ __rem = __div64_32(&(n), __base); \
} \
__rem; \ __rem; \
}) })
......
...@@ -68,4 +68,22 @@ static inline void eventpoll_release(struct file *file) {} ...@@ -68,4 +68,22 @@ static inline void eventpoll_release(struct file *file) {}
#endif #endif
#if defined(CONFIG_ARM) && defined(CONFIG_OABI_COMPAT)
/* ARM OABI has an incompatible struct layout and needs a special handler */
extern struct epoll_event __user *
epoll_put_uevent(__poll_t revents, __u64 data,
struct epoll_event __user *uevent);
#else
static inline struct epoll_event __user *
epoll_put_uevent(__poll_t revents, __u64 data,
struct epoll_event __user *uevent)
{
if (__put_user(revents, &uevent->events) ||
__put_user(data, &uevent->data))
return NULL;
return uevent+1;
}
#endif
#endif /* #ifndef _LINUX_EVENTPOLL_H */ #endif /* #ifndef _LINUX_EVENTPOLL_H */
...@@ -1373,6 +1373,9 @@ long ksys_old_shmctl(int shmid, int cmd, struct shmid_ds __user *buf); ...@@ -1373,6 +1373,9 @@ long ksys_old_shmctl(int shmid, int cmd, struct shmid_ds __user *buf);
long compat_ksys_semtimedop(int semid, struct sembuf __user *tsems, long compat_ksys_semtimedop(int semid, struct sembuf __user *tsems,
unsigned int nsops, unsigned int nsops,
const struct old_timespec32 __user *timeout); const struct old_timespec32 __user *timeout);
long __do_semtimedop(int semid, struct sembuf *tsems, unsigned int nsops,
const struct timespec64 *timeout,
struct ipc_namespace *ns);
int __sys_getsockopt(int fd, int level, int optname, char __user *optval, int __sys_getsockopt(int fd, int level, int optname, char __user *optval,
int __user *optlen); int __user *optlen);
......
...@@ -1984,47 +1984,34 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid) ...@@ -1984,47 +1984,34 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
return un; return un;
} }
static long do_semtimedop(int semid, struct sembuf __user *tsops, long __do_semtimedop(int semid, struct sembuf *sops,
unsigned nsops, const struct timespec64 *timeout) unsigned nsops, const struct timespec64 *timeout,
struct ipc_namespace *ns)
{ {
int error = -EINVAL; int error = -EINVAL;
struct sem_array *sma; struct sem_array *sma;
struct sembuf fast_sops[SEMOPM_FAST]; struct sembuf *sop;
struct sembuf *sops = fast_sops, *sop;
struct sem_undo *un; struct sem_undo *un;
int max, locknum; int max, locknum;
bool undos = false, alter = false, dupsop = false; bool undos = false, alter = false, dupsop = false;
struct sem_queue queue; struct sem_queue queue;
unsigned long dup = 0, jiffies_left = 0; unsigned long dup = 0, jiffies_left = 0;
struct ipc_namespace *ns;
ns = current->nsproxy->ipc_ns;
if (nsops < 1 || semid < 0) if (nsops < 1 || semid < 0)
return -EINVAL; return -EINVAL;
if (nsops > ns->sc_semopm) if (nsops > ns->sc_semopm)
return -E2BIG; return -E2BIG;
if (nsops > SEMOPM_FAST) {
sops = kvmalloc_array(nsops, sizeof(*sops),
GFP_KERNEL_ACCOUNT);
if (sops == NULL)
return -ENOMEM;
}
if (copy_from_user(sops, tsops, nsops * sizeof(*tsops))) {
error = -EFAULT;
goto out_free;
}
if (timeout) { if (timeout) {
if (timeout->tv_sec < 0 || timeout->tv_nsec < 0 || if (timeout->tv_sec < 0 || timeout->tv_nsec < 0 ||
timeout->tv_nsec >= 1000000000L) { timeout->tv_nsec >= 1000000000L) {
error = -EINVAL; error = -EINVAL;
goto out_free; goto out;
} }
jiffies_left = timespec64_to_jiffies(timeout); jiffies_left = timespec64_to_jiffies(timeout);
} }
max = 0; max = 0;
for (sop = sops; sop < sops + nsops; sop++) { for (sop = sops; sop < sops + nsops; sop++) {
unsigned long mask = 1ULL << ((sop->sem_num) % BITS_PER_LONG); unsigned long mask = 1ULL << ((sop->sem_num) % BITS_PER_LONG);
...@@ -2053,7 +2040,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2053,7 +2040,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
un = find_alloc_undo(ns, semid); un = find_alloc_undo(ns, semid);
if (IS_ERR(un)) { if (IS_ERR(un)) {
error = PTR_ERR(un); error = PTR_ERR(un);
goto out_free; goto out;
} }
} else { } else {
un = NULL; un = NULL;
...@@ -2064,25 +2051,25 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2064,25 +2051,25 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
if (IS_ERR(sma)) { if (IS_ERR(sma)) {
rcu_read_unlock(); rcu_read_unlock();
error = PTR_ERR(sma); error = PTR_ERR(sma);
goto out_free; goto out;
} }
error = -EFBIG; error = -EFBIG;
if (max >= sma->sem_nsems) { if (max >= sma->sem_nsems) {
rcu_read_unlock(); rcu_read_unlock();
goto out_free; goto out;
} }
error = -EACCES; error = -EACCES;
if (ipcperms(ns, &sma->sem_perm, alter ? S_IWUGO : S_IRUGO)) { if (ipcperms(ns, &sma->sem_perm, alter ? S_IWUGO : S_IRUGO)) {
rcu_read_unlock(); rcu_read_unlock();
goto out_free; goto out;
} }
error = security_sem_semop(&sma->sem_perm, sops, nsops, alter); error = security_sem_semop(&sma->sem_perm, sops, nsops, alter);
if (error) { if (error) {
rcu_read_unlock(); rcu_read_unlock();
goto out_free; goto out;
} }
error = -EIDRM; error = -EIDRM;
...@@ -2096,7 +2083,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2096,7 +2083,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
* entangled here and why it's RMID race safe on comments at sem_lock() * entangled here and why it's RMID race safe on comments at sem_lock()
*/ */
if (!ipc_valid_object(&sma->sem_perm)) if (!ipc_valid_object(&sma->sem_perm))
goto out_unlock_free; goto out_unlock;
/* /*
* semid identifiers are not unique - find_alloc_undo may have * semid identifiers are not unique - find_alloc_undo may have
* allocated an undo structure, it was invalidated by an RMID * allocated an undo structure, it was invalidated by an RMID
...@@ -2105,7 +2092,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2105,7 +2092,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
* "un" itself is guaranteed by rcu. * "un" itself is guaranteed by rcu.
*/ */
if (un && un->semid == -1) if (un && un->semid == -1)
goto out_unlock_free; goto out_unlock;
queue.sops = sops; queue.sops = sops;
queue.nsops = nsops; queue.nsops = nsops;
...@@ -2131,10 +2118,10 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2131,10 +2118,10 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
rcu_read_unlock(); rcu_read_unlock();
wake_up_q(&wake_q); wake_up_q(&wake_q);
goto out_free; goto out;
} }
if (error < 0) /* non-blocking error path */ if (error < 0) /* non-blocking error path */
goto out_unlock_free; goto out_unlock;
/* /*
* We need to sleep on this operation, so we put the current * We need to sleep on this operation, so we put the current
...@@ -2199,14 +2186,14 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2199,14 +2186,14 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
if (error != -EINTR) { if (error != -EINTR) {
/* see SEM_BARRIER_2 for purpose/pairing */ /* see SEM_BARRIER_2 for purpose/pairing */
smp_acquire__after_ctrl_dep(); smp_acquire__after_ctrl_dep();
goto out_free; goto out;
} }
rcu_read_lock(); rcu_read_lock();
locknum = sem_lock(sma, sops, nsops); locknum = sem_lock(sma, sops, nsops);
if (!ipc_valid_object(&sma->sem_perm)) if (!ipc_valid_object(&sma->sem_perm))
goto out_unlock_free; goto out_unlock;
/* /*
* No necessity for any barrier: We are protect by sem_lock() * No necessity for any barrier: We are protect by sem_lock()
...@@ -2218,7 +2205,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2218,7 +2205,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
* Leave without unlink_queue(), but with sem_unlock(). * Leave without unlink_queue(), but with sem_unlock().
*/ */
if (error != -EINTR) if (error != -EINTR)
goto out_unlock_free; goto out_unlock;
/* /*
* If an interrupt occurred we have to clean up the queue. * If an interrupt occurred we have to clean up the queue.
...@@ -2229,13 +2216,45 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops, ...@@ -2229,13 +2216,45 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
unlink_queue(sma, &queue); unlink_queue(sma, &queue);
out_unlock_free: out_unlock:
sem_unlock(sma, locknum); sem_unlock(sma, locknum);
rcu_read_unlock(); rcu_read_unlock();
out:
return error;
}
static long do_semtimedop(int semid, struct sembuf __user *tsops,
unsigned nsops, const struct timespec64 *timeout)
{
struct sembuf fast_sops[SEMOPM_FAST];
struct sembuf *sops = fast_sops;
struct ipc_namespace *ns;
int ret;
ns = current->nsproxy->ipc_ns;
if (nsops > ns->sc_semopm)
return -E2BIG;
if (nsops < 1)
return -EINVAL;
if (nsops > SEMOPM_FAST) {
sops = kvmalloc_array(nsops, sizeof(*sops), GFP_KERNEL_ACCOUNT);
if (sops == NULL)
return -ENOMEM;
}
if (copy_from_user(sops, tsops, nsops * sizeof(*tsops))) {
ret = -EFAULT;
goto out_free;
}
ret = __do_semtimedop(semid, sops, nsops, timeout, ns);
out_free: out_free:
if (sops != fast_sops) if (sops != fast_sops)
kvfree(sops); kvfree(sops);
return error;
return ret;
} }
long ksys_semtimedop(int semid, struct sembuf __user *tsops, long ksys_semtimedop(int semid, struct sembuf __user *tsops,
......
...@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src, ...@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
long copy_from_kernel_nofault(void *dst, const void *src, size_t size) long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
{ {
unsigned long align = 0;
if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
align = (unsigned long)dst | (unsigned long)src;
if (!copy_from_kernel_nofault_allowed(src, size)) if (!copy_from_kernel_nofault_allowed(src, size))
return -ERANGE; return -ERANGE;
pagefault_disable(); pagefault_disable();
copy_from_kernel_nofault_loop(dst, src, size, u64, Efault); if (!(align & 7))
copy_from_kernel_nofault_loop(dst, src, size, u32, Efault); copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
copy_from_kernel_nofault_loop(dst, src, size, u16, Efault); if (!(align & 3))
copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
if (!(align & 1))
copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
copy_from_kernel_nofault_loop(dst, src, size, u8, Efault); copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
pagefault_enable(); pagefault_enable();
return 0; return 0;
...@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault); ...@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
long copy_to_kernel_nofault(void *dst, const void *src, size_t size) long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
{ {
unsigned long align = 0;
if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
align = (unsigned long)dst | (unsigned long)src;
pagefault_disable(); pagefault_disable();
copy_to_kernel_nofault_loop(dst, src, size, u64, Efault); if (!(align & 7))
copy_to_kernel_nofault_loop(dst, src, size, u32, Efault); copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
copy_to_kernel_nofault_loop(dst, src, size, u16, Efault); if (!(align & 3))
copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
if (!(align & 1))
copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
copy_to_kernel_nofault_loop(dst, src, size, u8, Efault); copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
pagefault_enable(); pagefault_enable();
return 0; return 0;
......
Markdown is supported
0%
or
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
Please register or to comment