Commit 726eb70e authored by Linus Torvalds

Merge tag 'char-misc-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big set of char, misc, and other assorted driver subsystem
  patches for 5.10-rc1.

  There's a lot of different things in here, all over the drivers/
  directory. Some summaries:

   - soundwire driver updates

   - habanalabs driver updates

   - extcon driver updates

   - nitro_enclaves new driver

   - fsl-mc driver and core updates

   - mhi core and bus updates

   - nvmem driver updates

   - eeprom driver updates

   - binder driver updates and fixes

   - vbox minor bugfixes

   - fsi driver updates

   - w1 driver updates

   - coresight driver updates

   - interconnect driver updates

   - misc driver updates

   - other minor driver updates

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (396 commits)
  binder: fix UAF when releasing todo list
  docs: w1: w1_therm: Fix broken xref, mistakes, clarify text
  misc: Kconfig: fix a HISI_HIKEY_USB dependency
  LSM: Fix type of id parameter in kernel_post_load_data prototype
  misc: Kconfig: add a new dependency for HISI_HIKEY_USB
  firmware_loader: fix a kernel-doc markup
  w1: w1_therm: make w1_poll_completion static
  binder: simplify the return expression of binder_mmap
  test_firmware: Test partial read support
  firmware: Add request_partial_firmware_into_buf()
  firmware: Store opt_flags in fw_priv
  fs/kernel_file_read: Add "offset" arg for partial reads
  IMA: Add support for file reads without contents
  LSM: Add "contents" flag to kernel_read_file hook
  module: Call security_kernel_post_load_data()
  firmware_loader: Use security_post_load_data()
  LSM: Introduce kernel_post_load_data() hook
  fs/kernel_read_file: Add file_size output argument
  fs/kernel_read_file: Switch buffer size arg to size_t
  fs/kernel_read_file: Remove redundant size argument
  ...
parents c6dbef73 f3277cbf
What: /sys/bus/mhi/devices/.../serialnumber
Date: Sept 2020
KernelVersion: 5.10
Contact: Bhaumik Bhatt <bbhatt@codeaurora.org>
Description: The file holds the serial number of the client device obtained
using a BHI (Boot Host Interface) register read after at least
one attempt to power up the device has been made. If read
before the device has been powered on at least once, the file
will read all 0's.
Users: Any userspace application or clients interested in device info.
What: /sys/bus/mhi/devices/.../oem_pk_hash
Date: Sept 2020
KernelVersion: 5.10
Contact: Bhaumik Bhatt <bbhatt@codeaurora.org>
Description: The file holds the OEM PK Hash value of the endpoint device
obtained using a BHI (Boot Host Interface) register read after
at least one attempt to power up the device has been made. If
read before the device has been powered on at least once, the
file will read all 0's.
Users: Any userspace application or clients interested in device info.
What: /sys/bus/dfl/devices/dfl_dev.X/type
Date: Aug 2020
KernelVersion: 5.10
Contact: Xu Yilun <yilun.xu@intel.com>
Description: Read-only. It returns the type of the DFL FIU of the device. Currently
DFL supports 2 FIU types: 0 for FME, 1 for PORT.
Format: 0x%x
What: /sys/bus/dfl/devices/dfl_dev.X/feature_id
Date: Aug 2020
KernelVersion: 5.10
Contact: Xu Yilun <yilun.xu@intel.com>
Description: Read-only. It returns feature identifier local to its DFL FIU
type.
Format: 0x%x
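For illustration only, a minimal user-space sketch that consumes these two attributes; the device name dfl_dev.0 is a placeholder, and the FME/PORT decoding follows the type values documented above.

/* Sketch: read the DFL FIU type and feature_id of one device and decode it.
 * "dfl_dev.0" is a placeholder device name; adjust to a real device.
 */
#include <stdio.h>

int main(void)
{
        unsigned int type, feature_id;
        FILE *f;

        f = fopen("/sys/bus/dfl/devices/dfl_dev.0/type", "r");
        if (!f || fscanf(f, "%x", &type) != 1)
                return 1;
        fclose(f);

        f = fopen("/sys/bus/dfl/devices/dfl_dev.0/feature_id", "r");
        if (!f || fscanf(f, "%x", &feature_id) != 1)
                return 1;
        fclose(f);

        /* type 0 is FME, type 1 is PORT, per the ABI description above */
        printf("%s, feature_id 0x%x\n", type == 0 ? "FME" : "PORT", feature_id);
        return 0;
}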
......@@ -36,3 +36,11 @@ Contact: linux-fsi@lists.ozlabs.org
Description:
Provides a means of reading/writing a 32 bit value from/to a
specified FSI bus address.
What: /sys/bus/platform/devices/../cfam_reset
Date: Sept 2020
KernelVersion: 5.10
Contact: linux-fsi@lists.ozlabs.org
Description:
Provides a means of resetting the cfam that is attached to the
FSI device.
......@@ -41,6 +41,13 @@ Contact: Tomas Winkler <tomas.winkler@intel.com>
Description: Stores mei client fixed address, if any
Format: %d
What: /sys/bus/mei/devices/.../vtag
Date: Nov 2020
KernelVersion: 5.9
Contact: Tomas Winkler <tomas.winkler@intel.com>
Description: Stores mei client vtag support status
Format: %d
What: /sys/bus/mei/devices/.../max_len
Date: Nov 2019
KernelVersion: 5.5
......
What: /sys/bus/soundwire/devices/sdw:.../status
/sys/bus/soundwire/devices/sdw:.../device_number
Date: September 2020
Contact: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Bard Liao <yung-chuan.liao@linux.intel.com>
Vinod Koul <vkoul@kernel.org>
Description: SoundWire Slave status
These properties report the Slave status, e.g. if it
is UNATTACHED or not, and in the latter case show the
device_number. This status information is useful to
detect devices exposed by platform firmware but not
physically present on the bus, and conversely devices
not exposed in platform firmware but enumerated.
What: /sys/bus/soundwire/devices/sdw:.../dev-properties/mipi_revision
/sys/bus/soundwire/devices/sdw:.../dev-properties/wake_capable
/sys/bus/soundwire/devices/sdw:.../dev-properties/test_mode_capable
......
......@@ -2,13 +2,17 @@ What: /sys/class/habanalabs/hl<n>/armcp_kernel_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Linux kernel running on the device's CPU
Description: Version of the Linux kernel running on the device's CPU.
Will be DEPRECATED in Linux kernel version 5.10, and be
replaced with cpucp_kernel_ver
What: /sys/class/habanalabs/hl<n>/armcp_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the application running on the device's CPU
Will be DEPRECATED in Linux kernel version 5.10, and be
replaced with cpucp_ver
What: /sys/class/habanalabs/hl<n>/clk_max_freq_mhz
Date: Jun 2019
......@@ -33,6 +37,18 @@ KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Device's CPLD F/W
What: /sys/class/habanalabs/hl<n>/cpucp_kernel_ver
Date: Oct 2020
KernelVersion: 5.10
Contact: oded.gabbay@gmail.com
Description: Version of the Linux kernel running on the device's CPU
What: /sys/class/habanalabs/hl<n>/cpucp_ver
Date: Oct 2020
KernelVersion: 5.10
Contact: oded.gabbay@gmail.com
Description: Version of the application running on the device's CPU
What: /sys/class/habanalabs/hl<n>/device_type
Date: Jan 2019
KernelVersion: 5.1
......
......@@ -49,10 +49,13 @@ Description:
will be changed only in device RAM, so it will be cleared when
power is lost. Trigger a 'save' to EEPROM command to keep
values after power-on. Read or write are :
* '9..12': device resolution in bit
* '9..14': device resolution in bit
or resolution to set in bit
* '-xx': xx is kernel error when reading the resolution
* Anything else: do nothing
Some DS18B20 clones are fixed in 12-bit resolution, so the
actual resolution is read back from the chip and verified. Error
is reported if the results differ.
Users: any user space application which wants to communicate with
w1_term device
......@@ -86,7 +89,7 @@ Description:
*write* :
* '0' : save the 2 or 3 bytes to the device EEPROM
(i.e. TH, TL and config register)
* '9..12' : set the device resolution in RAM
* '9..14' : set the device resolution in RAM
(if supported)
* Anything else: do nothing
refer to Documentation/w1/slaves/w1_therm.rst for detailed
......@@ -114,3 +117,47 @@ Description:
of the bulk read command (not the current temperature).
Users: any user space application which wants to communicate with
w1_term device
What: /sys/bus/w1/devices/.../conv_time
Date: July 2020
Contact: Ivan Zaentsev <ivan.zaentsev@wirenboard.ru>
Description:
(RW) Get, set, or measure a temperature conversion time. The
setting remains active until a resolution change. Then it is
reset to default (datasheet) conversion time for a new
resolution.
*read*: Actual conversion time in milliseconds. *write*:
'0': Set the default conversion time from the datasheet.
'1': Measure and set the conversion time. Make a single
temperature conversion, measure the actual time it takes,
and increase it by 20% as a margin over the temperature
range. The new conversion time can be obtained by reading
this same attribute.
other positive value:
Set the conversion time in milliseconds.
Users: An application using the w1_term device
What: /sys/bus/w1/devices/.../features
Date: July 2020
Contact: Ivan Zaentsev <ivan.zaentsev@wirenboard.ru>
Description:
(RW) Control optional driver settings.
Bit masks to read/write (bitwise OR):
1: Enable check for conversion success. If byte 6 of
scratchpad memory is 0xC after conversion, and
temperature reads 85.00 (powerup value) or 127.94
(insufficient power) - return a conversion error.
2: Enable poll for conversion completion. Generate read cycles
after the conversion start and wait for 1's. In parasite
power mode this feature is not available.
*read*: Currently selected features.
*write*: Select features.
Users: An application using the w1_term device
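For illustration only, a small C sketch that exercises both attributes described above; the device id 28-0123456789ab is a placeholder, and enabling feature bit 2 assumes the sensor is not parasite powered.

/* Sketch: enable both optional w1_therm features and read the conversion time.
 * "28-0123456789ab" is a placeholder sensor id; use a real device from
 * /sys/bus/w1/devices/.
 */
#include <stdio.h>

#define W1_DEV "/sys/bus/w1/devices/28-0123456789ab"

int main(void)
{
        FILE *f;
        int conv_time_ms;

        /* Enable both optional features: 1 | 2 (conversion check + completion poll). */
        f = fopen(W1_DEV "/features", "w");
        if (!f)
                return 1;
        fprintf(f, "3\n");
        fclose(f);

        /* Read back the currently used conversion time in milliseconds. */
        f = fopen(W1_DEV "/conv_time", "r");
        if (!f || fscanf(f, "%d", &conv_time_ms) != 1)
                return 1;
        fclose(f);

        printf("conversion time: %d ms\n", conv_time_ms);
        return 0;
}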
* PTN5150 CC (Configuration Channel) Logic device
PTN5150 is a small thin low power CC logic chip supporting the USB Type-C
connector application with CC control logic detection and indication functions.
It is interfaced to the host controller using an I2C interface.
Required properties:
- compatible: should be "nxp,ptn5150"
- reg: specifies the I2C slave address of the device
- int-gpio: should contain a phandle and GPIO specifier for the GPIO pin
connected to the PTN5150's INTB pin.
- vbus-gpio: should contain a phandle and GPIO specifier for the GPIO pin which
is used to control VBUS.
- pinctrl-names : a pinctrl state named "default" must be defined.
- pinctrl-0 : phandle referencing pin configuration of interrupt and vbus
control.
Example:
ptn5150@1d {
compatible = "nxp,ptn5150";
reg = <0x1d>;
int-gpio = <&msmgpio 78 GPIO_ACTIVE_HIGH>;
vbus-gpio = <&msmgpio 148 GPIO_ACTIVE_HIGH>;
pinctrl-names = "default";
pinctrl-0 = <&ptn5150_default>;
status = "okay";
};
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/extcon/extcon-ptn5150.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: PTN5150 CC (Configuration Channel) Logic device
maintainers:
- Krzysztof Kozlowski <krzk@kernel.org>
description: |
PTN5150 is a small thin low power CC logic chip supporting the USB Type-C
connector application with CC control logic detection and indication
functions. It is interfaced to the host controller using an I2C interface.
properties:
compatible:
const: nxp,ptn5150
int-gpios:
deprecated: true
description:
GPIO pin (input) connected to the PTN5150's INTB pin.
Use "interrupts" instead.
interrupts:
maxItems: 1
reg:
maxItems: 1
vbus-gpios:
description:
GPIO pin (output) used to control VBUS. If skipped, no such control
takes place.
required:
- compatible
- interrupts
- reg
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
i2c {
#address-cells = <1>;
#size-cells = <0>;
ptn5150@1d {
compatible = "nxp,ptn5150";
reg = <0x1d>;
interrupt-parent = <&msmgpio>;
interrupts = <78 IRQ_TYPE_LEVEL_HIGH>;
vbus-gpios = <&msmgpio 148 GPIO_ACTIVE_HIGH>;
};
};
......@@ -12,6 +12,13 @@ Required properties:
- pinctrl-0: phandle to pinctrl node
- pinctrl-names: pinctrl state
Optional properties:
- cfam-reset-gpios: GPIO for CFAM reset
- fsi-routing-gpios: GPIO for setting the FSI mux (internal or cabled)
- fsi-mux-gpios: GPIO for detecting the desired FSI mux state
Examples:
fsi-master {
......@@ -21,4 +28,9 @@ Examples:
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_fsi1_default>;
clocks = <&syscon ASPEED_CLK_GATE_FSICLK>;
fsi-routing-gpios = <&gpio0 ASPEED_GPIO(Q, 7) GPIO_ACTIVE_HIGH>;
fsi-mux-gpios = <&gpio0 ASPEED_GPIO(B, 0) GPIO_ACTIVE_HIGH>;
cfam-reset-gpios = <&gpio0 ASPEED_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
};
......@@ -19,7 +19,8 @@ directly.
Required properties:
- compatible : contains the interconnect provider compatible string
- #interconnect-cells : number of cells in an interconnect specifier needed to
encode the interconnect node id
encode the interconnect node id and optionally add a
path tag
Example:
......@@ -44,6 +45,10 @@ components it has to interact with.
Required properties:
interconnects : Pairs of phandles and interconnect provider specifier to denote
the edge source and destination ports of the interconnect path.
An optional path tag value could be specified as an additional argument
to both endpoints and in such cases, this information will be passed
to the interconnect framework to do aggregation based on the attached
tag.
Optional properties:
interconnect-names : List of interconnect path name strings sorted in the same
......@@ -62,3 +67,20 @@ Example:
interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
interconnect-names = "sdhc-mem";
};
Example with path tags:
gnoc: interconnect@17900000 {
...
#interconnect-cells = <2>;
};
mnoc: interconnect@1380000 {
...
#interconnect-cells = <2>;
};
cpu@0 {
...
interconnects = <&gnoc MASTER_APPSS_PROC 3 &mnoc SLAVE_EBI1 3>;
};
......@@ -21,6 +21,23 @@ properties:
enum:
- qcom,bcm-voter
qcom,tcs-wait:
description: |
Optional mask of which TCSs (Triggered Command Sets) wait for completion
upon triggering. If not specified, then the AMC and WAKE sets wait for
completion. The mask bits are available in the QCOM_ICC_TAG_* defines.
The AMC TCS is triggered immediately when icc_set_bw() is called. The
WAKE/SLEEP TCSs are triggered when the RSC transitions between active and
sleep modes.
In most cases, it's necessary to wait in both the AMC and WAKE sets to
ensure resources are available before use. If a specific RSC and its use
cases can ensure sufficient delay by other means, then this can be
overridden to reduce latencies.
$ref: /schemas/types.yaml#/definitions/uint32
required:
- compatible
......@@ -39,7 +56,10 @@ examples:
# as defined in Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
- |
#include <dt-bindings/interconnect/qcom,icc.h>
disp_bcm_voter: bcm_voter {
compatible = "qcom,bcm-voter";
qcom,tcs-wait = <QCOM_ICC_TAG_AMC>;
};
...
......@@ -19,6 +19,8 @@ properties:
enum:
- qcom,sc7180-osm-l3
- qcom,sdm845-osm-l3
- qcom,sm8150-osm-l3
- qcom,sm8250-epss-l3
reg:
maxItems: 1
......
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,sdm845.yaml#
$id: http://devicetree.org/schemas/interconnect/qcom,rpmh.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SDM845 Network-On-Chip Interconnect
title: Qualcomm RPMh Network-On-Chip Interconnect
maintainers:
- Georgi Djakov <georgi.djakov@linaro.org>
- Odelu Kukatla <okukatla@codeaurora.org>
description: |
SDM845 interconnect providers support system bandwidth requirements through
RPMh interconnect providers support system bandwidth requirements through
RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
able to communicate with the BCM through the Resource State Coordinator (RSC)
associated with each execution environment. Provider nodes must point to at
......@@ -23,6 +24,19 @@ properties:
compatible:
enum:
- qcom,sc7180-aggre1-noc
- qcom,sc7180-aggre2-noc
- qcom,sc7180-camnoc-virt
- qcom,sc7180-compute-noc
- qcom,sc7180-config-noc
- qcom,sc7180-dc-noc
- qcom,sc7180-gem-noc
- qcom,sc7180-ipa-virt
- qcom,sc7180-mc-virt
- qcom,sc7180-mmss-noc
- qcom,sc7180-npu-noc
- qcom,sc7180-qup-virt
- qcom,sc7180-system-noc
- qcom,sdm845-aggre1-noc
- qcom,sdm845-aggre2-noc
- qcom,sdm845-config-noc
......@@ -31,6 +45,28 @@ properties:
- qcom,sdm845-mem-noc
- qcom,sdm845-mmss-noc
- qcom,sdm845-system-noc
- qcom,sm8150-aggre1-noc
- qcom,sm8150-aggre2-noc
- qcom,sm8150-camnoc-noc
- qcom,sm8150-compute-noc
- qcom,sm8150-config-noc
- qcom,sm8150-dc-noc
- qcom,sm8150-gem-noc
- qcom,sm8150-ipa-virt
- qcom,sm8150-mc-virt
- qcom,sm8150-mmss-noc
- qcom,sm8150-system-noc
- qcom,sm8250-aggre1-noc
- qcom,sm8250-aggre2-noc
- qcom,sm8250-compute-noc
- qcom,sm8250-config-noc
- qcom,sm8250-dc-noc
- qcom,sm8250-gem-noc
- qcom,sm8250-ipa-virt
- qcom,sm8250-mc-virt
- qcom,sm8250-mmss-noc
- qcom,sm8250-npu-noc
- qcom,sm8250-system-noc
'#interconnect-cells':
const: 1
......
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,sc7180.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SC7180 Network-On-Chip Interconnect
maintainers:
- Odelu Kukatla <okukatla@codeaurora.org>
description: |
SC7180 interconnect providers support system bandwidth requirements through
RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
able to communicate with the BCM through the Resource State Coordinator (RSC)
associated with each execution environment. Provider nodes must point to at
least one RPMh device child node pertaining to their RSC and each provider
can map to multiple RPMh resources.
properties:
reg:
maxItems: 1
compatible:
enum:
- qcom,sc7180-aggre1-noc
- qcom,sc7180-aggre2-noc
- qcom,sc7180-camnoc-virt
- qcom,sc7180-compute-noc
- qcom,sc7180-config-noc
- qcom,sc7180-dc-noc
- qcom,sc7180-gem-noc
- qcom,sc7180-ipa-virt
- qcom,sc7180-mc-virt
- qcom,sc7180-mmss-noc
- qcom,sc7180-npu-noc
- qcom,sc7180-qup-virt
- qcom,sc7180-system-noc
'#interconnect-cells':
const: 1
qcom,bcm-voters:
$ref: /schemas/types.yaml#/definitions/phandle-array
description: |
List of phandles to qcom,bcm-voter nodes that are required by
this interconnect to send RPMh commands.
qcom,bcm-voter-names:
$ref: /schemas/types.yaml#/definitions/string-array
description: |
Names for each of the qcom,bcm-voters specified.
required:
- compatible
- reg
- '#interconnect-cells'
- qcom,bcm-voters
additionalProperties: false
examples:
- |
#include <dt-bindings/interconnect/qcom,sc7180.h>
config_noc: interconnect@1500000 {
compatible = "qcom,sc7180-config-noc";
reg = <0x01500000 0x28000>;
#interconnect-cells = <1>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
system_noc: interconnect@1620000 {
compatible = "qcom,sc7180-system-noc";
reg = <0x01620000 0x17080>;
#interconnect-cells = <1>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
mmss_noc: interconnect@1740000 {
compatible = "qcom,sc7180-mmss-noc";
reg = <0x01740000 0x1c100>;
#interconnect-cells = <1>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
......@@ -11,6 +11,7 @@ board specific bus parameters.
Example:
"qcom,soundwire-v1.3.0"
"qcom,soundwire-v1.5.0"
"qcom,soundwire-v1.5.1"
"qcom,soundwire-v1.6.0"
- reg:
Usage: required
......
......@@ -42,6 +42,11 @@ The session is terminated calling :c:func:`close(int fd)`.
A code snippet for an application communicating with Intel AMTHI client:
In order to support virtualization or sandboxing, a trusted supervisor
can use :c:macro:`IOCTL_MEI_CONNECT_CLIENT_VTAG` to create
virtual channels with an Intel ME feature. Not all features support
virtual channels; such a client will answer with EOPNOTSUPP.
.. code-block:: C
struct mei_connect_client_data data;
......@@ -110,6 +115,38 @@ Connect to firmware Feature/Client.
data that can be sent or received (e.g. if MTU=2K, one can send
requests of up to 2k bytes and receive responses of up to 2k bytes).
IOCTL_MEI_CONNECT_CLIENT_VTAG:
------------------------------
.. code-block:: none
Usage:
struct mei_connect_client_data_vtag client_data_vtag;
ioctl(fd, IOCTL_MEI_CONNECT_CLIENT_VTAG, &client_data_vtag);
Inputs:
struct mei_connect_client_data_vtag - contains the following
Input fields:
in_client_uuid - GUID of the FW feature to connect to.
vtag - virtual tag [1, 255]
Outputs:
out_client_properties - Client Properties: MTU and Protocol Version.
Error returns:
ENOTTY No such client (i.e. wrong GUID) or connection is not allowed.
EINVAL Wrong IOCTL Number or tag == 0
ENODEV Device or Connection is not initialized or ready.
ENOMEM Unable to allocate memory to client internal data.
EFAULT Fatal Error (e.g. Unable to access user input data)
EBUSY Connection Already Open
EOPNOTSUPP Vtag is not supported
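A minimal user-space sketch of the flow above (the GUID, vtag value and /dev/mei0 node are placeholders; the field layout assumed here is the connect/out_client_properties union declared in include/uapi/linux/mei.h):

/* Sketch: connect to an ME feature on a dedicated vtag channel.
 * MY_FEATURE_GUID is a placeholder; use the GUID of the target FW client.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/mei.h>

int main(void)
{
        struct mei_connect_client_data_vtag data;
        const uuid_le MY_FEATURE_GUID = UUID_LE(0x12345678, 0x9abc, 0xdef0,
                        0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0);
        int fd, rc;

        fd = open("/dev/mei0", O_RDWR);
        if (fd < 0)
                return 1;

        memset(&data, 0, sizeof(data));
        data.connect.in_client_uuid = MY_FEATURE_GUID;
        data.connect.vtag = 1;          /* valid range is [1, 255] */

        rc = ioctl(fd, IOCTL_MEI_CONNECT_CLIENT_VTAG, &data);
        if (rc) {
                /* EOPNOTSUPP means this FW client has no vtag support */
                close(fd);
                return 1;
        }

        printf("connected, MTU %u\n",
               data.out_client_properties.max_msg_length);
        close(fd);
        return 0;
}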
IOCTL_MEI_NOTIFY_SET
---------------------
......
......@@ -328,8 +328,11 @@ Code Seq# Include File Comments
0xAC 00-1F linux/raw.h
0xAD 00 Netfilter device in development:
<mailto:rusty@rustcorp.com.au>
0xAE all linux/kvm.h Kernel-based Virtual Machine
0xAE 00-1F linux/kvm.h Kernel-based Virtual Machine
<mailto:kvm@vger.kernel.org>
0xAE 40-FF linux/kvm.h Kernel-based Virtual Machine
<mailto:kvm@vger.kernel.org>
0xAE 20-3F linux/nitro_enclaves.h Nitro Enclaves
0xAF 00-1F linux/fsl_hypervisor.h Freescale hypervisor
0xB0 all RATIO devices in development:
<mailto:vgo@ratio.de>
......
......@@ -11,6 +11,7 @@ Linux Virtualization Support
uml/user_mode_linux_howto_v2
paravirt_ops
guest-halt-polling
ne_overview
.. only:: html and subproject
......
.. SPDX-License-Identifier: GPL-2.0
==============
Nitro Enclaves
==============
Overview
========
Nitro Enclaves (NE) is a new Amazon Elastic Compute Cloud (EC2) capability
that allows customers to carve out isolated compute environments within EC2
instances [1].
For example, an application that processes sensitive data and runs in a VM
can be separated from other applications running in the same VM. This
application then runs in a separate VM from the primary VM, namely an enclave.
An enclave runs alongside the VM that spawned it. This setup matches the needs
of low latency applications. The resources that are allocated for the enclave, such as
memory and CPUs, are carved out of the primary VM. Each enclave is mapped to a
process running in the primary VM, that communicates with the NE driver via an
ioctl interface.
In this sense, there are two components:
1. An enclave abstraction process - a user space process running in the primary
VM guest that uses the provided ioctl interface of the NE driver to spawn an
enclave VM (that's 2 below).
There is an NE emulated PCI device exposed to the primary VM. The driver for this
new PCI device is included in the NE driver.
The ioctl logic is mapped to PCI device commands, e.g. the NE_START_ENCLAVE ioctl
maps to an enclave start PCI command. The PCI device commands are then
translated into actions taken on the hypervisor side; that's the Nitro
hypervisor running on the host where the primary VM is running. The Nitro
hypervisor is based on core KVM technology.
2. The enclave itself - a VM running on the same host as the primary VM that
spawned it. Memory and CPUs are carved out of the primary VM and are dedicated
for the enclave VM. An enclave does not have persistent storage attached.
The memory regions carved out of the primary VM and given to an enclave need to
be 2 MiB / 1 GiB aligned, physically contiguous memory regions (or a multiple of
this size, e.g. 8 MiB). The memory can be allocated e.g. by using hugetlbfs from
user space [2][3]. The memory size for an enclave needs to be at least 64 MiB.
The enclave memory and CPUs need to be from the same NUMA node.
An enclave runs on dedicated cores. CPU 0 and its CPU siblings need to remain
available for the primary VM. A CPU pool has to be set for NE purposes by a
user with admin capability. See the cpu list section of the kernel
documentation [4] for the CPU pool format.
An enclave communicates with the primary VM via a local communication channel,
using virtio-vsock [5]. The primary VM has a virtio-pci vsock emulated device,
while the enclave VM has a virtio-mmio vsock emulated device. The vsock device
uses eventfd for signaling. The enclave VM sees the usual interfaces - local
APIC and IOAPIC - to get interrupts from virtio-vsock device. The virtio-mmio
device is placed in memory below the typical 4 GiB.
The application that runs in the enclave needs to be packaged in an enclave
image together with the OS (e.g. kernel, ramdisk, init) that will run in the
enclave VM. The enclave VM has its own kernel and follows the standard Linux
boot protocol [6].
The kernel bzImage, the kernel command line and the ramdisk(s) are part of the
Enclave Image Format (EIF), together with an EIF header that includes metadata
such as a magic number, EIF version, image size and CRC.
Hash values are computed for the entire enclave image (EIF), the kernel and
ramdisk(s). That's used, for example, to check that the enclave image that is
loaded in the enclave VM is the one that was intended to be run.
These crypto measurements are included in a signed attestation document
generated by the Nitro Hypervisor and further used to prove the identity of the
enclave; KMS is an example of a service that NE is integrated with and that
checks the attestation document.
The enclave image (EIF) is loaded in the enclave memory at offset 8 MiB. The
init process in the enclave connects to the vsock CID of the primary VM and a
predefined port - 9000 - to send a heartbeat value - 0xb7. This mechanism is
used to check in the primary VM that the enclave has booted. The CID of the
primary VM is 3.
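For illustration, a minimal sketch of this heartbeat from the enclave side, using only the values given above (CID 3, port 9000, value 0xb7); the actual logic lives in the enclave's init process.

/* Sketch of the boot heartbeat described above: connect over vsock to the
 * primary VM (CID 3) on port 9000 and send the 0xb7 heartbeat byte.
 */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
        struct sockaddr_vm addr;
        unsigned char heartbeat = 0xb7;
        int fd;

        fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0)
                return 1;

        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = 3;       /* CID of the primary VM */
        addr.svm_port = 9000;   /* predefined heartbeat port */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            write(fd, &heartbeat, sizeof(heartbeat)) != sizeof(heartbeat)) {
                close(fd);
                return 1;
        }

        close(fd);
        return 0;
}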
If the enclave VM crashes or gracefully exits, an interrupt event is received by
the NE driver. This event is sent further to the user space enclave process
running in the primary VM via a poll notification mechanism. Then the user space
enclave process can exit.
[1] https://aws.amazon.com/ec2/nitro/nitro-enclaves/
[2] https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html
[3] https://lwn.net/Articles/807108/
[4] https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
[5] https://man7.org/linux/man-pages/man7/vsock.7.html
[6] https://www.kernel.org/doc/html/latest/x86/boot.html
......@@ -6,6 +6,7 @@ Supported chips:
* Maxim ds18*20 based temperature sensors.
* Maxim ds1825 based temperature sensors.
* GXCAS GC20MH01 temperature sensor.
Author: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
......@@ -13,8 +14,8 @@ Author: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Description
-----------
w1_therm provides basic temperature conversion for ds18*20 devices, and the
ds28ea00 device.
w1_therm provides basic temperature conversion for ds18*20, ds28ea00, GX20MH01
devices.
Supported family codes:
......@@ -26,59 +27,72 @@ W1_THERM_DS1825 0x3B
W1_THERM_DS28EA00 0x42
==================== ====
Support is provided through the sysfs w1_slave file. Each open and
read sequence will initiate a temperature conversion then provide two
Support is provided through the sysfs entry ``w1_slave``. Each open and
read sequence will initiate a temperature conversion, then provide two
lines of ASCII output. The first line contains the nine hex bytes
read along with a calculated crc value and YES or NO if it matched.
If the crc matched the returned values are retained. The second line
displays the retained values along with a temperature in millidegrees
Centigrade after t=.
Alternatively, temperature can be read using temperature sysfs, it
return only temperature in millidegrees Centigrade.
Alternatively, temperature can be read using ``temperature`` sysfs, it
returns only the temperature in millidegrees Centigrade.
A bulk read of all devices on the bus could be done writing 'trigger'
in the therm_bulk_read sysfs entry at w1_bus_master level. This will
sent the convert command on all devices on the bus, and if parasite
powered devices are detected on the bus (and strong pullup is enable
A bulk read of all devices on the bus could be done writing ``trigger``
to ``therm_bulk_read`` entry at w1_bus_master level. This will
send the convert command to all devices on the bus, and if parasite
powered devices are detected on the bus (and strong pullup is enabled
in the module), it will drive the line high during the longer conversion
time required by parasited powered device on the line. Reading
therm_bulk_read will return 0 if no bulk conversion pending,
``therm_bulk_read`` will return 0 if no bulk conversion pending,
-1 if at least one sensor still in conversion, 1 if conversion is complete
but at least one sensor value has not been read yet. Result temperature is
then accessed by reading the temperature sysfs entry of each device, which
then accessed by reading the ``temperature`` entry of each device, which
may return empty if conversion is still in progress. Note that if a bulk
read is sent but one sensor is not read immediately, the next access to
temperature on this device will return the temperature measured at the
``temperature`` on this device will return the temperature measured at the
time of issue of the bulk read command (not the current temperature).
Writing a value between 9 and 12 to the sysfs w1_slave file will change the
precision of the sensor for the next readings. This value is in (volatile)
SRAM, so it is reset when the sensor gets power-cycled.
A strong pullup will be applied during the conversion if required.
To store the current precision configuration into EEPROM, the value 0
has to be written to the sysfs w1_slave file. Since the EEPROM has a limited
amount of writes (>50k), this command should be used wisely.
``conv_time`` is used to get the current conversion time (read) and to
adjust it (write). The temperature conversion time depends on the device type and
its current resolution. The default conversion time is set by the driver according
to the device datasheet. The conversion time of many clones of the original
devices deviates from the datasheet specs. There are three options: 1) manually set
the correct conversion time by writing a value in milliseconds to ``conv_time``; 2)
auto-measure and set a conversion time by writing ``1`` to
``conv_time``; 3) use ``features`` to enable polling for conversion
completion. Options 2 and 3 can't be used in parasite power mode. To get back to
the default conversion time, write ``0`` to ``conv_time``.
Alternatively, resolution can be set or read (value from 9 to 12) using the
dedicated resolution sysfs entry on each device. This sysfs entry is not
present for devices not supporting this feature. Driver will adjust the
correct conversion time for each device regarding to its resolution setting.
In particular, strong pullup will be applied if required during the conversion
duration.
Writing a resolution value (in bits) to ``w1_slave`` will change the
precision of the sensor for the next readings. Allowed resolutions are defined by
the sensor. Resolution is reset when the sensor gets power-cycled.
The write-only sysfs entry eeprom is an alternative for EEPROM operations:
* 'save': will save device RAM to EEPROM
* 'restore': will restore EEPROM data in device RAM.
To store the current resolution in EEPROM, write ``0`` to ``w1_slave``.
Since the EEPROM has a limited amount of writes (>50k), this command should be
used wisely.
ext_power syfs entry allow tho check the power status of each device.
* '0': device parasite powered
* '1': device externally powered
Alternatively, resolution can be read or written using the dedicated
``resolution`` entry on each device, if supported by the sensor.
sysfs alarms allow read or write TH and TL (Temperature High an Low) alarms.
Some non-genuine DS18B20 chips are fixed in 12-bit mode only, so the actual
resolution is read back from the chip and verified.
Note: Changing the resolution reverts the conversion time to default.
The write-only sysfs entry ``eeprom`` is an alternative for EEPROM operations.
Write ``save`` to save device RAM to EEPROM. Write ``restore`` to restore EEPROM
data in device RAM.
``ext_power`` entry allows checking the power state of each device. Reads
``0`` if the device is parasite powered, ``1`` if the device is externally powered.
Sysfs ``alarms`` allow read or write TH and TL (Temperature High and Low) alarms.
Values shall be space separated and in the device range (typically -55 degC
to 125 degC). Values are integers as they are stored in an 8-bit register in
the device. Lowest value is automatically put to TL.Once set, alarms could
the device. Lowest value is automatically put to TL. Once set, alarms could
be searched at master level.
The module parameter strong_pullup can be set to 0 to disable the
......@@ -102,5 +116,24 @@ The DS28EA00 provides an additional two pins for implementing a sequence
detection algorithm. This feature allows you to determine the physical
location of the chip in the 1-wire bus without needing pre-existing
knowledge of the bus ordering. Support is provided through the sysfs
w1_seq file. The file will contain a single line with an integer value
``w1_seq``. The file will contain a single line with an integer value
representing the device index in the bus starting at 0.
``features`` sysfs entry controls optional driver settings per device.
Insufficient power in parasite mode, line noise and insufficient conversion
time may lead to conversion failure. Original DS18B20 and some clones allow for
detection of invalid conversion. Write bit mask ``1`` to ``features`` to enable
checking the conversion success. If byte 6 of scratchpad memory is 0xC after
conversion and temperature reads 85.00 (powerup value) or 127.94 (insufficient
power), the driver returns a conversion error. Bit mask ``2`` enables poll for
conversion completion (normal power only) by generating read cycles on the bus
after conversion starts. In parasite power mode this feature is not available.
Feature bit masks may be combined (OR). More details in
Documentation/ABI/testing/sysfs-driver-w1_therm
GX20MH01 device shares family number 0x28 with DS18*20. The device is generally
compatible with DS18B20. Added are lowest 2\ :sup:`-5`, 2\ :sup:`-6` temperature
bits in Config register; R2 bit in Config register enabling 13 and 14 bit
resolutions. The device is powered up in 14-bit resolution mode. The conversion
times specified in the datasheet are too low and have to be increased. The
device supports driver features ``1`` and ``2``.
......@@ -1725,6 +1725,7 @@ ARM/CORESIGHT FRAMEWORK AND DRIVERS
M: Mathieu Poirier <mathieu.poirier@linaro.org>
R: Suzuki K Poulose <suzuki.poulose@arm.com>
R: Mike Leach <mike.leach@linaro.org>
L: coresight@lists.linaro.org (moderated for non-subscribers)
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/ABI/testing/sysfs-bus-coresight-devices-*
......@@ -4101,6 +4102,11 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
F: drivers/char/
F: drivers/misc/
F: include/linux/miscdevice.h
X: drivers/char/agp/
X: drivers/char/hw_random/
X: drivers/char/ipmi/
X: drivers/char/random.c
X: drivers/char/tpm/
CHECKPATCH
M: Andy Whitcroft <apw@canonical.com>
......@@ -6828,14 +6834,17 @@ F: drivers/net/ethernet/nvidia/*
FPGA DFL DRIVERS
M: Wu Hao <hao.wu@intel.com>
R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-bus-dfl
F: Documentation/fpga/dfl.rst
F: drivers/fpga/dfl*
F: include/uapi/linux/fpga-dfl.h
FPGA MANAGER FRAMEWORK
M: Moritz Fischer <mdf@kernel.org>
R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
W: http://www.rocketboards.org
......@@ -7885,6 +7894,13 @@ W: http://www.hisilicon.com
F: Documentation/devicetree/bindings/net/hisilicon*.txt
F: drivers/net/ethernet/hisilicon/
HIKEY960 ONBOARD USB GPIO HUB DRIVER
M: John Stultz <john.stultz@linaro.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: drivers/misc/hisi_hikey_usb.c
F: Documentation/devicetree/bindings/misc/hisilicon-hikey-usb.yaml
HISILICON PMU DRIVER
M: Shaokun Zhang <zhangshaokun@hisilicon.com>
S: Supported
......@@ -11353,6 +11369,7 @@ M: Hemant Kumar <hemantk@codeaurora.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
F: Documentation/ABI/stable/sysfs-bus-mhi
F: Documentation/mhi/
F: drivers/bus/mhi/
F: include/linux/mhi.h
......@@ -12307,6 +12324,19 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lftan/nios2.git
F: arch/nios2/
NITRO ENCLAVES (NE)
M: Andra Paraschiv <andraprs@amazon.com>
M: Alexandru Vasile <lexnv@amazon.com>
M: Alexandru Ciobotaru <alcioa@amazon.com>
L: linux-kernel@vger.kernel.org
S: Supported
W: https://aws.amazon.com/ec2/nitro/nitro-enclaves/
F: Documentation/virt/ne_overview.rst
F: drivers/virt/nitro_enclaves/
F: include/linux/nitro_enclaves.h
F: include/uapi/linux/nitro_enclaves.h
F: samples/nitro_enclaves/
NOHZ, DYNTICKS SUPPORT
M: Frederic Weisbecker <fweisbec@gmail.com>
M: Thomas Gleixner <tglx@linutronix.de>
......@@ -12469,6 +12499,13 @@ F: drivers/iio/gyro/fxas21002c_core.c
F: drivers/iio/gyro/fxas21002c_i2c.c
F: drivers/iio/gyro/fxas21002c_spi.c
NXP PTN5150A CC LOGIC AND EXTCON DRIVER
M: Krzysztof Kozlowski <krzk@kernel.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/extcon/extcon-ptn5150.yaml
F: drivers/extcon/extcon-ptn5150.c
NXP SGTL5000 DRIVER
M: Fabio Estevam <festevam@gmail.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
......
......@@ -223,7 +223,7 @@ static struct binder_transaction_log_entry *binder_transaction_log_add(
struct binder_work {
struct list_head entry;
enum {
enum binder_work_type {
BINDER_WORK_TRANSACTION = 1,
BINDER_WORK_TRANSACTION_COMPLETE,
BINDER_WORK_RETURN_ERROR,
......@@ -885,27 +885,6 @@ static struct binder_work *binder_dequeue_work_head_ilocked(
return w;
}
/**
* binder_dequeue_work_head() - Dequeues the item at head of list
* @proc: binder_proc associated with list
* @list: list to dequeue head
*
* Removes the head of the list if there are items on the list
*
* Return: pointer dequeued binder_work, NULL if list was empty
*/
static struct binder_work *binder_dequeue_work_head(
struct binder_proc *proc,
struct list_head *list)
{
struct binder_work *w;
binder_inner_proc_lock(proc);
w = binder_dequeue_work_head_ilocked(list);
binder_inner_proc_unlock(proc);
return w;
}
static void
binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
static void binder_free_thread(struct binder_thread *thread);
......@@ -2344,8 +2323,6 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
* file is done when the transaction is torn
* down.
*/
WARN_ON(failed_at &&
proc->tsk == current->group_leader);
} break;
case BINDER_TYPE_PTR:
/*
......@@ -3136,7 +3113,7 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
tr->offsets_size, extra_buffers_size,
!reply && (t->flags & TF_ONE_WAY));
!reply && (t->flags & TF_ONE_WAY), current->tgid);
if (IS_ERR(t->buffer)) {
/*
* -ESRCH indicates VMA cleared. The target is dying.
......@@ -4587,13 +4564,17 @@ static void binder_release_work(struct binder_proc *proc,
struct list_head *list)
{
struct binder_work *w;
enum binder_work_type wtype;
while (1) {
w = binder_dequeue_work_head(proc, list);
binder_inner_proc_lock(proc);
w = binder_dequeue_work_head_ilocked(list);
wtype = w ? w->type : 0;
binder_inner_proc_unlock(proc);
if (!w)
return;
switch (w->type) {
switch (wtype) {
case BINDER_WORK_TRANSACTION: {
struct binder_transaction *t;
......@@ -4627,9 +4608,11 @@ static void binder_release_work(struct binder_proc *proc,
kfree(death);
binder_stats_deleted(BINDER_STAT_DEATH);
} break;
case BINDER_WORK_NODE:
break;
default:
pr_err("unexpected work type, %d, not freed\n",
w->type);
wtype);
break;
}
}
......@@ -5182,9 +5165,7 @@ static const struct vm_operations_struct binder_vm_ops = {
static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
int ret;
struct binder_proc *proc = filp->private_data;
const char *failure_string;
if (proc->tsk != current->group_leader)
return -EINVAL;
......@@ -5196,9 +5177,9 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
(unsigned long)pgprot_val(vma->vm_page_prot));
if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
ret = -EPERM;
failure_string = "bad vm_flags";
goto err_bad_arg;
pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
proc->pid, vma->vm_start, vma->vm_end, "bad vm_flags", -EPERM);
return -EPERM;
}
vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
vma->vm_flags &= ~VM_MAYWRITE;
......@@ -5206,15 +5187,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
vma->vm_ops = &binder_vm_ops;
vma->vm_private_data = proc;
ret = binder_alloc_mmap_handler(&proc->alloc, vma);
if (ret)
return ret;
return 0;
err_bad_arg:
pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
return ret;
return binder_alloc_mmap_handler(&proc->alloc, vma);
}
static int binder_open(struct inode *nodp, struct file *filp)
......
......@@ -338,12 +338,50 @@ static inline struct vm_area_struct *binder_alloc_get_vma(
return vma;
}
static void debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
{
/*
* Find the amount and size of buffers allocated by the current caller;
* The idea is that once we cross the threshold, whoever is responsible
* for the low async space is likely to try to send another async txn,
* and at some point we'll catch them in the act. This is more efficient
* than keeping a map per pid.
*/
struct rb_node *n;
struct binder_buffer *buffer;
size_t total_alloc_size = 0;
size_t num_buffers = 0;
for (n = rb_first(&alloc->allocated_buffers); n != NULL;
n = rb_next(n)) {
buffer = rb_entry(n, struct binder_buffer, rb_node);
if (buffer->pid != pid)
continue;
if (!buffer->async_transaction)
continue;
total_alloc_size += binder_alloc_buffer_size(alloc, buffer)
+ sizeof(struct binder_buffer);
num_buffers++;
}
/*
* Warn if this pid has more than 50 transactions, or more than 50% of
* async space (which is 25% of total buffer size).
*/
if (num_buffers > 50 || total_alloc_size > alloc->buffer_size / 4) {
binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
"%d: pid %d spamming oneway? %zd buffers allocated for a total size of %zd\n",
alloc->pid, pid, num_buffers, total_alloc_size);
}
}
static struct binder_buffer *binder_alloc_new_buf_locked(
struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async)
int is_async,
int pid)
{
struct rb_node *n = alloc->free_buffers.rb_node;
struct binder_buffer *buffer;
......@@ -486,11 +524,20 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
buffer->offsets_size = offsets_size;
buffer->async_transaction = is_async;
buffer->extra_buffers_size = extra_buffers_size;
buffer->pid = pid;
if (is_async) {
alloc->free_async_space -= size + sizeof(struct binder_buffer);
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
"%d: binder_alloc_buf size %zd async free %zd\n",
alloc->pid, size, alloc->free_async_space);
if (alloc->free_async_space < alloc->buffer_size / 10) {
/*
* Start detecting spammers once we have less than 20%
* of async space left (which is less than 10% of total
* buffer size).
*/
debug_low_async_space_locked(alloc, pid);
}
}
return buffer;
......@@ -508,6 +555,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
* @offsets_size: user specified buffer offset
* @extra_buffers_size: size of extra space for meta-data (eg, security context)
* @is_async: buffer for async transaction
* @pid: pid to attribute allocation to (used for debugging)
*
* Allocate a new buffer given the requested sizes. Returns
* the kernel version of the buffer pointer. The size allocated
......@@ -520,13 +568,14 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async)
int is_async,
int pid)
{
struct binder_buffer *buffer;
mutex_lock(&alloc->mutex);
buffer = binder_alloc_new_buf_locked(alloc, data_size, offsets_size,
extra_buffers_size, is_async);
extra_buffers_size, is_async, pid);
mutex_unlock(&alloc->mutex);
return buffer;
}
......@@ -652,7 +701,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
* @alloc: binder_alloc for this proc
* @buffer: kernel pointer to buffer
*
* Free the buffer allocated via binder_alloc_new_buffer()
* Free the buffer allocated via binder_alloc_new_buf()
*/
void binder_alloc_free_buf(struct binder_alloc *alloc,
struct binder_buffer *buffer)
......
......@@ -32,6 +32,7 @@ struct binder_transaction;
* @offsets_size: size of array of offsets
* @extra_buffers_size: size of space for other objects (like sg lists)
* @user_data: user pointer to base of buffer space
* @pid: pid to attribute the buffer to (caller)
*
* Bookkeeping structure for binder transaction buffers
*/
......@@ -51,6 +52,7 @@ struct binder_buffer {
size_t offsets_size;
size_t extra_buffers_size;
void __user *user_data;
int pid;
};
/**
......@@ -117,7 +119,8 @@ extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async);
int is_async,
int pid);
extern void binder_alloc_init(struct binder_alloc *alloc);
extern int binder_alloc_shrinker_init(void);
extern void binder_alloc_vma_close(struct binder_alloc *alloc);
......
......@@ -119,7 +119,7 @@ static void binder_selftest_alloc_buf(struct binder_alloc *alloc,
int i;
for (i = 0; i < BUFFER_NUM; i++) {
buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);
buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0, 0);
if (IS_ERR(buffers[i]) ||
!check_buffer_pages_allocated(alloc, buffers[i],
sizes[i])) {
......
......@@ -63,7 +63,7 @@ static const struct constant_table binderfs_param_stats[] = {
{}
};
const struct fs_parameter_spec binderfs_fs_parameters[] = {
static const struct fs_parameter_spec binderfs_fs_parameters[] = {
fsparam_u32("max", Opt_max),
fsparam_enum("stats", Opt_stats_mode, binderfs_param_stats),
{}
......
......@@ -272,9 +272,9 @@ static ssize_t firmware_loading_store(struct device *dev,
dev_err(dev, "%s: map pages failed\n",
__func__);
else
rc = security_kernel_post_read_file(NULL,
fw_priv->data, fw_priv->size,
READING_FIRMWARE);
rc = security_kernel_post_load_data(fw_priv->data,
fw_priv->size,
LOADING_FIRMWARE, "blob");
/*
* Same logic as fw_load_abort, only the DONE bit
......@@ -490,13 +490,11 @@ fw_create_instance(struct firmware *firmware, const char *fw_name,
/**
* fw_load_sysfs_fallback() - load a firmware via the sysfs fallback mechanism
* @fw_sysfs: firmware sysfs information for the firmware to load
* @opt_flags: flags of options, FW_OPT_*
* @timeout: timeout to wait for the load
*
* In charge of constructing a sysfs fallback interface for firmware loading.
**/
static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs,
u32 opt_flags, long timeout)
static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
{
int retval = 0;
struct device *f_dev = &fw_sysfs->dev;
......@@ -518,7 +516,7 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs,
list_add(&fw_priv->pending_list, &pending_fw_head);
mutex_unlock(&fw_lock);
if (opt_flags & FW_OPT_UEVENT) {
if (fw_priv->opt_flags & FW_OPT_UEVENT) {
fw_priv->need_uevent = true;
dev_set_uevent_suppress(f_dev, false);
dev_dbg(f_dev, "firmware: requesting %s\n", fw_priv->fw_name);
......@@ -580,10 +578,10 @@ static int fw_load_from_user_helper(struct firmware *firmware,
}
fw_sysfs->fw_priv = firmware->priv;
ret = fw_load_sysfs_fallback(fw_sysfs, opt_flags, timeout);
ret = fw_load_sysfs_fallback(fw_sysfs, timeout);
if (!ret)
ret = assign_fw(firmware, device, opt_flags);
ret = assign_fw(firmware, device);
out_unlock:
usermodehelper_read_unlock();
......@@ -613,7 +611,7 @@ static bool fw_run_sysfs_fallback(u32 opt_flags)
return false;
/* Also permit LSMs and IMA to fail firmware sysfs fallback */
ret = security_kernel_load_data(LOADING_FIRMWARE);
ret = security_kernel_load_data(LOADING_FIRMWARE, true);
if (ret < 0)
return false;
......@@ -625,7 +623,8 @@ static bool fw_run_sysfs_fallback(u32 opt_flags)
* @fw: pointer to firmware image
* @name: name of firmware file to look for
* @device: device for which firmware is being loaded
* @opt_flags: options to control firmware loading behaviour
* @opt_flags: options to control firmware loading behaviour, as defined by
* &enum fw_opt
* @ret: return value from direct lookup which triggered the fallback mechanism
*
* This function is called if direct lookup for the firmware failed, it enables
......
......@@ -67,10 +67,9 @@ static inline void unregister_sysfs_loader(void)
#endif /* CONFIG_FW_LOADER_USER_HELPER */
#ifdef CONFIG_EFI_EMBEDDED_FIRMWARE
int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags);
int firmware_fallback_platform(struct fw_priv *fw_priv);
#else
static inline int firmware_fallback_platform(struct fw_priv *fw_priv,
u32 opt_flags)
static inline int firmware_fallback_platform(struct fw_priv *fw_priv)
{
return -ENOENT;
}
......
......@@ -8,16 +8,16 @@
#include "fallback.h"
#include "firmware.h"
int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags)
int firmware_fallback_platform(struct fw_priv *fw_priv)
{
const u8 *data;
size_t size;
int rc;
if (!(opt_flags & FW_OPT_FALLBACK_PLATFORM))
if (!(fw_priv->opt_flags & FW_OPT_FALLBACK_PLATFORM))
return -ENOENT;
rc = security_kernel_load_data(LOADING_FIRMWARE_EFI_EMBEDDED);
rc = security_kernel_load_data(LOADING_FIRMWARE, true);
if (rc)
return rc;
......@@ -27,6 +27,12 @@ int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags)
if (fw_priv->data && size > fw_priv->allocated_size)
return -ENOMEM;
rc = security_kernel_post_load_data((u8 *)data, size, LOADING_FIRMWARE,
"platform");
if (rc)
return rc;
if (!fw_priv->data)
fw_priv->data = vmalloc(size);
if (!fw_priv->data)
......
......@@ -32,6 +32,8 @@
* @FW_OPT_FALLBACK_PLATFORM: Enable fallback to device fw copy embedded in
* the platform's main firmware. If both this fallback and the sysfs
* fallback are enabled, then this fallback will be tried first.
* @FW_OPT_PARTIAL: Allow partial read of firmware instead of needing to read
* entire file.
*/
enum fw_opt {
FW_OPT_UEVENT = BIT(0),
......@@ -41,6 +43,7 @@ enum fw_opt {
FW_OPT_NOCACHE = BIT(4),
FW_OPT_NOFALLBACK_SYSFS = BIT(5),
FW_OPT_FALLBACK_PLATFORM = BIT(6),
FW_OPT_PARTIAL = BIT(7),
};
enum fw_status {
......@@ -68,6 +71,8 @@ struct fw_priv {
void *data;
size_t size;
size_t allocated_size;
size_t offset;
u32 opt_flags;
#ifdef CONFIG_FW_LOADER_PAGED_BUF
bool is_paged_buf;
struct page **pages;
......@@ -136,7 +141,7 @@ static inline void fw_state_done(struct fw_priv *fw_priv)
__fw_state_set(fw_priv, FW_STATUS_DONE);
}
int assign_fw(struct firmware *fw, struct device *device, u32 opt_flags);
int assign_fw(struct firmware *fw, struct device *device);
#ifdef CONFIG_FW_LOADER_PAGED_BUF
void fw_free_paged_buf(struct fw_priv *fw_priv);
......
// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
/*
* Copyright 2013-2016 Freescale Semiconductor Inc.
* Copyright 2020 NXP
*
*/
#include <linux/kernel.h>
......@@ -8,6 +9,13 @@
#include "fsl-mc-private.h"
/*
* cache the DPRC version to reduce the number of commands
* towards the mc firmware
*/
static u16 dprc_major_ver;
static u16 dprc_minor_ver;
/**
* dprc_open() - Open DPRC object for use
* @mc_io: Pointer to MC portal's I/O object
......@@ -72,6 +80,77 @@ int dprc_close(struct fsl_mc_io *mc_io,
}
EXPORT_SYMBOL_GPL(dprc_close);
/**
* dprc_reset_container - Reset child container.
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPRC object
* @child_container_id: ID of the container to reset
* @options: 32 bit options:
* - 0 (no bits set) - all the objects inside the container are
* reset. The child containers are entered recursively and the
* objects reset. All the objects (including the child containers)
* are closed.
* - bit 0 set - all the objects inside the container are reset.
* However the child containers are not entered recursively.
* This option is supported for API versions >= 6.5
* In case a software context crashes or becomes non-responsive, the parent
* may wish to reset its resources container before the software context is
* restarted.
*
* This routine informs all objects assigned to the child container that the
* container is being reset, so they may perform any cleanup operations that are
* needed. All objects handles that were owned by the child container shall be
* closed.
*
* Note that such request may be submitted even if the child software context
* has not crashed, but the resulting object cleanup operations will not be
* aware of that.
*
* Return: '0' on Success; Error code otherwise.
*/
int dprc_reset_container(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token,
int child_container_id,
u32 options)
{
struct fsl_mc_command cmd = { 0 };
struct dprc_cmd_reset_container *cmd_params;
u32 cmdid = DPRC_CMDID_RESET_CONT;
int err;
/*
* If the DPRC object version was not yet cached, cache it now.
* Otherwise use the already cached value.
*/
if (!dprc_major_ver && !dprc_minor_ver) {
err = dprc_get_api_version(mc_io, 0,
&dprc_major_ver,
&dprc_minor_ver);
if (err)
return err;
}
/*
* MC API 6.5 introduced a new field in the command used to pass
* some flags.
* Bit 0 indicates that the child containers are not recursively reset.
*/
if (dprc_major_ver > 6 || (dprc_major_ver == 6 && dprc_minor_ver >= 5))
cmdid = DPRC_CMDID_RESET_CONT_V2;
/* prepare command */
cmd.header = mc_encode_cmd_header(cmdid, cmd_flags, token);
cmd_params = (struct dprc_cmd_reset_container *)cmd.params;
cmd_params->child_container_id = cpu_to_le32(child_container_id);
cmd_params->options = cpu_to_le32(options);
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
EXPORT_SYMBOL_GPL(dprc_reset_container);
/**
* dprc_set_irq() - Set IRQ information for the DPRC to trigger an interrupt.
* @mc_io: Pointer to MC portal's I/O object
......@@ -281,7 +360,7 @@ int dprc_get_attributes(struct fsl_mc_io *mc_io,
/* retrieve response parameters */
rsp_params = (struct dprc_rsp_get_attributes *)cmd.params;
attr->container_id = le32_to_cpu(rsp_params->container_id);
attr->icid = le16_to_cpu(rsp_params->icid);
attr->icid = le32_to_cpu(rsp_params->icid);
attr->options = le32_to_cpu(rsp_params->options);
attr->portal_id = le32_to_cpu(rsp_params->portal_id);
......@@ -443,30 +522,44 @@ int dprc_get_obj_region(struct fsl_mc_io *mc_io,
struct fsl_mc_command cmd = { 0 };
struct dprc_cmd_get_obj_region *cmd_params;
struct dprc_rsp_get_obj_region *rsp_params;
u16 major_ver, minor_ver;
int err;
/* prepare command */
err = dprc_get_api_version(mc_io, 0,
&major_ver,
&minor_ver);
if (err)
return err;
/**
* MC API version 6.3 introduced a new field to the region
* descriptor: base_address. If the older API is in use then the base
* address is set to zero to indicate it needs to be obtained elsewhere
* (typically the device tree).
*/
if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3))
cmd.header =
mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V2,
cmd_flags, token);
else
cmd.header =
mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG,
cmd_flags, token);
/*
* If the DPRC object version was not yet cached, cache it now.
* Otherwise use the already cached value.
*/
if (!dprc_major_ver && !dprc_minor_ver) {
err = dprc_get_api_version(mc_io, 0,
&dprc_major_ver,
&dprc_minor_ver);
if (err)
return err;
}
if (dprc_major_ver > 6 || (dprc_major_ver == 6 && dprc_minor_ver >= 6)) {
/*
* MC API version 6.6 changed the size of the MC portals and software
* portals to 64K (as implemented by hardware). If older API is in use the
* size reported is less (64 bytes for mc portals and 4K for software
* portals).
*/
cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V3,
cmd_flags, token);
} else if (dprc_major_ver == 6 && dprc_minor_ver >= 3) {
/*
* MC API version 6.3 introduced a new field to the region
* descriptor: base_address. If the older API is in use then the base
* address is set to zero to indicate it needs to be obtained elsewhere
* (typically the device tree).
*/
cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V2,
cmd_flags, token);
} else {
cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG,
cmd_flags, token);
}
cmd_params = (struct dprc_cmd_get_obj_region *)cmd.params;
cmd_params->obj_id = cpu_to_le32(obj_id);
......@@ -483,7 +576,7 @@ int dprc_get_obj_region(struct fsl_mc_io *mc_io,
rsp_params = (struct dprc_rsp_get_obj_region *)cmd.params;
region_desc->base_offset = le64_to_cpu(rsp_params->base_offset);
region_desc->size = le32_to_cpu(rsp_params->size);
if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3))
if (dprc_major_ver > 6 || (dprc_major_ver == 6 && dprc_minor_ver >= 3))
region_desc->base_address = le64_to_cpu(rsp_params->base_addr);
else
region_desc->base_address = 0;
......
......@@ -344,7 +344,7 @@ EXPORT_SYMBOL_GPL(fsl_mc_object_free);
* Initialize the interrupt pool associated with an fsl-mc bus.
* It allocates a block of IRQs from the GIC-ITS.
*/
int fsl_mc_populate_irq_pool(struct fsl_mc_bus *mc_bus,
int fsl_mc_populate_irq_pool(struct fsl_mc_device *mc_bus_dev,
unsigned int irq_count)
{
unsigned int i;
......@@ -352,10 +352,14 @@ int fsl_mc_populate_irq_pool(struct fsl_mc_bus *mc_bus,
struct fsl_mc_device_irq *irq_resources;
struct fsl_mc_device_irq *mc_dev_irq;
int error;
struct fsl_mc_device *mc_bus_dev = &mc_bus->mc_dev;
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
struct fsl_mc_resource_pool *res_pool =
&mc_bus->resource_pools[FSL_MC_POOL_IRQ];
/* do nothing if the IRQ pool is already populated */
if (mc_bus->irq_resources)
return 0;
if (irq_count == 0 ||
irq_count > FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS)
return -EINVAL;
......@@ -407,9 +411,9 @@ EXPORT_SYMBOL_GPL(fsl_mc_populate_irq_pool);
* Teardown the interrupt pool associated with an fsl-mc bus.
* It frees the IRQs that were allocated to the pool, back to the GIC-ITS.
*/
void fsl_mc_cleanup_irq_pool(struct fsl_mc_bus *mc_bus)
void fsl_mc_cleanup_irq_pool(struct fsl_mc_device *mc_bus_dev)
{
struct fsl_mc_device *mc_bus_dev = &mc_bus->mc_dev;
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
struct fsl_mc_resource_pool *res_pool =
&mc_bus->resource_pools[FSL_MC_POOL_IRQ];
......
......@@ -3,6 +3,7 @@
* Freescale Management Complex (MC) bus driver
*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
* Copyright 2019-2020 NXP
* Author: German Rivera <German.Rivera@freescale.com>
*
*/
......@@ -78,6 +79,12 @@ static int fsl_mc_bus_match(struct device *dev, struct device_driver *drv)
struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(drv);
bool found = false;
/* When driver_override is set, only bind to the matching driver */
if (mc_dev->driver_override) {
found = !strcmp(mc_dev->driver_override, mc_drv->driver.name);
goto out;
}
if (!mc_drv->match_id_table)
goto out;
......@@ -147,8 +154,52 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(modalias);
static ssize_t driver_override_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
char *driver_override, *old = mc_dev->driver_override;
char *cp;
if (WARN_ON(dev->bus != &fsl_mc_bus_type))
return -EINVAL;
if (count >= (PAGE_SIZE - 1))
return -EINVAL;
driver_override = kstrndup(buf, count, GFP_KERNEL);
if (!driver_override)
return -ENOMEM;
cp = strchr(driver_override, '\n');
if (cp)
*cp = '\0';
if (strlen(driver_override)) {
mc_dev->driver_override = driver_override;
} else {
kfree(driver_override);
mc_dev->driver_override = NULL;
}
kfree(old);
return count;
}
static ssize_t driver_override_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
return snprintf(buf, PAGE_SIZE, "%s\n", mc_dev->driver_override);
}
static DEVICE_ATTR_RW(driver_override);
static struct attribute *fsl_mc_dev_attrs[] = {
&dev_attr_modalias.attr,
&dev_attr_driver_override.attr,
NULL,
};
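The new driver_override attribute mirrors the PCI one: userspace writes a driver name before (re)binding a device, and an empty write clears the override again. A minimal userspace sketch, assuming a hypothetical device name (dpni.0) and driver name; real paths and names depend on the system:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device and driver names, for illustration only */
	const char *attr = "/sys/bus/fsl-mc/devices/dpni.0/driver_override";
	const char *drv = "vfio-fsl-mc\n";
	int fd = open(attr, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Writing just "\n" would clear the override instead */
	if (write(fd, drv, strlen(drv)) < 0)
		perror("write");
	close(fd);
	return 0;
}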
......@@ -452,7 +503,7 @@ static int get_dprc_attr(struct fsl_mc_io *mc_io,
}
static int get_dprc_icid(struct fsl_mc_io *mc_io,
int container_id, u16 *icid)
int container_id, u32 *icid)
{
struct dprc_attributes attr;
int error;
......@@ -564,11 +615,8 @@ static int fsl_mc_device_get_mmio_regions(struct fsl_mc_device *mc_dev,
regions[i].end = regions[i].start + region_desc.size - 1;
regions[i].name = "fsl-mc object MMIO region";
regions[i].flags = IORESOURCE_IO;
if (region_desc.flags & DPRC_REGION_CACHEABLE)
regions[i].flags |= IORESOURCE_CACHEABLE;
if (region_desc.flags & DPRC_REGION_SHAREABLE)
regions[i].flags |= IORESOURCE_MEM;
regions[i].flags = region_desc.flags & IORESOURCE_BITS;
regions[i].flags |= IORESOURCE_MEM;
}
mc_dev->regions = regions;
......@@ -630,6 +678,7 @@ int fsl_mc_device_add(struct fsl_mc_obj_desc *obj_desc,
if (!mc_bus)
return -ENOMEM;
mutex_init(&mc_bus->scan_mutex);
mc_dev = &mc_bus->mc_dev;
} else {
/*
......@@ -748,6 +797,9 @@ EXPORT_SYMBOL_GPL(fsl_mc_device_add);
*/
void fsl_mc_device_remove(struct fsl_mc_device *mc_dev)
{
kfree(mc_dev->driver_override);
mc_dev->driver_override = NULL;
/*
* The device-specific remove callback will get invoked by device_del()
*/
......@@ -908,9 +960,6 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
u32 mc_portal_size, mc_stream_id;
struct resource *plat_res;
if (!iommu_present(&fsl_mc_bus_type))
return -EPROBE_DEFER;
mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL);
if (!mc)
return -ENOMEM;
......@@ -918,11 +967,11 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, mc);
plat_res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
mc->fsl_mc_regs = devm_ioremap_resource(&pdev->dev, plat_res);
if (IS_ERR(mc->fsl_mc_regs))
return PTR_ERR(mc->fsl_mc_regs);
if (plat_res)
mc->fsl_mc_regs = devm_ioremap_resource(&pdev->dev, plat_res);
if (IS_ENABLED(CONFIG_ACPI) && !dev_of_node(&pdev->dev)) {
if (mc->fsl_mc_regs && IS_ENABLED(CONFIG_ACPI) &&
!dev_of_node(&pdev->dev)) {
mc_stream_id = readl(mc->fsl_mc_regs + FSL_MC_FAPR);
/*
* HW ORs the PL and BMT bit, places the result in bit 15 of
......
......@@ -80,10 +80,12 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
/* DPRC command versioning */
#define DPRC_CMD_BASE_VERSION 1
#define DPRC_CMD_2ND_VERSION 2
#define DPRC_CMD_3RD_VERSION 3
#define DPRC_CMD_ID_OFFSET 4
#define DPRC_CMD(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
#define DPRC_CMD_V2(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_2ND_VERSION)
#define DPRC_CMD_V3(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_3RD_VERSION)
/* DPRC command IDs */
#define DPRC_CMDID_CLOSE DPRC_CMD(0x800)
......@@ -91,6 +93,8 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
#define DPRC_CMDID_GET_API_VERSION DPRC_CMD(0xa05)
#define DPRC_CMDID_GET_ATTR DPRC_CMD(0x004)
#define DPRC_CMDID_RESET_CONT DPRC_CMD(0x005)
#define DPRC_CMDID_RESET_CONT_V2 DPRC_CMD_V2(0x005)
#define DPRC_CMDID_SET_IRQ DPRC_CMD(0x010)
#define DPRC_CMDID_SET_IRQ_ENABLE DPRC_CMD(0x012)
......@@ -103,6 +107,7 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
#define DPRC_CMDID_GET_OBJ DPRC_CMD(0x15A)
#define DPRC_CMDID_GET_OBJ_REG DPRC_CMD(0x15E)
#define DPRC_CMDID_GET_OBJ_REG_V2 DPRC_CMD_V2(0x15E)
#define DPRC_CMDID_GET_OBJ_REG_V3 DPRC_CMD_V3(0x15E)
#define DPRC_CMDID_SET_OBJ_IRQ DPRC_CMD(0x15F)
#define DPRC_CMDID_GET_CONNECTION DPRC_CMD(0x16C)
......@@ -111,6 +116,11 @@ struct dprc_cmd_open {
__le32 container_id;
};
struct dprc_cmd_reset_container {
__le32 child_container_id;
__le32 options;
};
struct dprc_cmd_set_irq {
/* cmd word 0 */
__le32 irq_val;
......@@ -152,8 +162,7 @@ struct dprc_cmd_clear_irq_status {
struct dprc_rsp_get_attributes {
/* response word 0 */
__le32 container_id;
__le16 icid;
__le16 pad;
__le32 icid;
/* response word 1 */
__le32 options;
__le32 portal_id;
......@@ -330,7 +339,7 @@ int dprc_clear_irq_status(struct fsl_mc_io *mc_io,
*/
struct dprc_attributes {
int container_id;
u16 icid;
u32 icid;
int portal_id;
u64 options;
};
......@@ -358,12 +367,6 @@ int dprc_set_obj_irq(struct fsl_mc_io *mc_io,
int obj_id,
u8 irq_index,
struct dprc_irq_cfg *irq_cfg);
/* Region flags */
/* Cacheable - Indicates that region should be mapped as cacheable */
#define DPRC_REGION_CACHEABLE 0x00000001
#define DPRC_REGION_SHAREABLE 0x00000002
/**
* enum dprc_region_type - Region type
* @DPRC_REGION_TYPE_MC_PORTAL: MC portal region
......@@ -518,11 +521,6 @@ struct dpcon_cmd_set_notification {
__le64 user_ctx;
};
/**
* Maximum number of total IRQs that can be pre-allocated for an MC bus'
* IRQ pool
*/
#define FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS 256
/**
* struct fsl_mc_resource_pool - Pool of MC resources of a given
......@@ -597,11 +595,6 @@ void fsl_mc_msi_domain_free_irqs(struct device *dev);
struct irq_domain *fsl_mc_find_msi_domain(struct device *dev);
int fsl_mc_populate_irq_pool(struct fsl_mc_bus *mc_bus,
unsigned int irq_count);
void fsl_mc_cleanup_irq_pool(struct fsl_mc_bus *mc_bus);
int __must_check fsl_create_mc_io(struct device *dev,
phys_addr_t mc_portal_phys_addr,
u32 mc_portal_size,
......
......@@ -129,7 +129,12 @@ int __must_check fsl_create_mc_io(struct device *dev,
*/
void fsl_destroy_mc_io(struct fsl_mc_io *mc_io)
{
struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev;
struct fsl_mc_device *dpmcp_dev;
if (!mc_io)
return;
dpmcp_dev = mc_io->dpmcp_dev;
if (dpmcp_dev)
fsl_mc_io_unset_dpmcp(mc_io);
......
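Accepting a NULL mc_io makes fsl_destroy_mc_io() safe to call from error and teardown paths that may run before the portal was ever created. A minimal sketch of the calling pattern this enables; the surrounding private structure is hypothetical:

#include "fsl-mc-private.h"	/* internal header declaring fsl_destroy_mc_io() */

/* Hypothetical private data, for illustration only */
struct example_priv {
	struct fsl_mc_io *mc_io;
};

static void example_teardown(struct example_priv *priv)
{
	/*
	 * priv->mc_io may still be NULL if setup failed before
	 * fsl_create_mc_io() succeeded; fsl_destroy_mc_io() now
	 * tolerates NULL, so the caller needs no check of its own.
	 */
	fsl_destroy_mc_io(priv->mc_io);
	priv->mc_io = NULL;
}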
......@@ -6,9 +6,17 @@
#
config MHI_BUS
tristate "Modem Host Interface (MHI) bus"
help
Bus driver for MHI protocol. Modem Host Interface (MHI) is a
communication protocol used by the host processors to control
and communicate with modem devices over a high speed peripheral
bus or shared memory.
tristate "Modem Host Interface (MHI) bus"
help
Bus driver for MHI protocol. Modem Host Interface (MHI) is a
communication protocol used by the host processors to control
and communicate with modem devices over a high speed peripheral
bus or shared memory.
config MHI_BUS_DEBUG
bool "Debugfs support for the MHI bus"
depends on MHI_BUS && DEBUG_FS
help
Enable debugfs support for use with the MHI transport. Allows
reading and/or modifying some values within the MHI controller
for debug and test purposes.
obj-$(CONFIG_MHI_BUS) := mhi.o
obj-$(CONFIG_MHI_BUS) += mhi.o
mhi-y := init.o main.o pm.o boot.o
mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
......@@ -392,13 +392,28 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
void *buf;
dma_addr_t dma_addr;
size_t size;
int ret;
int i, ret;
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "Device MHI is not in valid state\n");
return;
}
/* save hardware info from BHI */
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_SERIALNU,
&mhi_cntrl->serial_number);
if (ret)
dev_err(dev, "Could not capture serial number via BHI\n");
for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++) {
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_OEMPKHASH(i),
&mhi_cntrl->oem_pk_hash[i]);
if (ret) {
dev_err(dev, "Could not capture OEM PK HASH via BHI\n");
break;
}
}
/* If device is in pass through, do reset to ready state transition */
if (mhi_cntrl->ee == MHI_EE_PTHRU)
goto fw_load_ee_pthru;
......
......@@ -4,6 +4,7 @@
*
*/
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
......@@ -75,6 +76,42 @@ const char *to_mhi_pm_state_str(enum mhi_pm_state state)
return mhi_pm_state_str[index];
}
static ssize_t serial_number_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
return snprintf(buf, PAGE_SIZE, "Serial Number: %u\n",
mhi_cntrl->serial_number);
}
static DEVICE_ATTR_RO(serial_number);
static ssize_t oem_pk_hash_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
int i, cnt = 0;
for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++)
cnt += snprintf(buf + cnt, PAGE_SIZE - cnt,
"OEMPKHASH[%d]: 0x%x\n", i,
mhi_cntrl->oem_pk_hash[i]);
return cnt;
}
static DEVICE_ATTR_RO(oem_pk_hash);
static struct attribute *mhi_dev_attrs[] = {
&dev_attr_serial_number.attr,
&dev_attr_oem_pk_hash.attr,
NULL,
};
ATTRIBUTE_GROUPS(mhi_dev);
/* MHI protocol requires the transfer ring to be aligned with ring length */
static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ring,
......@@ -125,6 +162,13 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
if (mhi_event->offload_ev)
continue;
if (mhi_event->irq >= mhi_cntrl->nr_irqs) {
dev_err(dev, "irq %d not available for event ring\n",
mhi_event->irq);
ret = -EINVAL;
goto error_request;
}
ret = request_irq(mhi_cntrl->irq[mhi_event->irq],
mhi_irq_handler,
IRQF_SHARED | IRQF_NO_SUSPEND,
......@@ -562,10 +606,10 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
}
static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
struct mhi_event *mhi_event;
struct mhi_event_config *event_cfg;
const struct mhi_event_config *event_cfg;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int i, num;
......@@ -636,9 +680,6 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
mhi_event++;
}
/* We need IRQ for each event ring + additional one for BHI */
mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
return 0;
error_ev_cfg:
......@@ -648,9 +689,9 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
}
static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
struct mhi_channel_config *ch_cfg;
const struct mhi_channel_config *ch_cfg;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int i;
u32 chan;
......@@ -766,7 +807,7 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
}
static int parse_config(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
int ret;
......@@ -803,7 +844,7 @@ static int parse_config(struct mhi_controller *mhi_cntrl,
}
int mhi_register_controller(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
struct mhi_event *mhi_event;
struct mhi_chan *mhi_chan;
......@@ -904,6 +945,7 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
mhi_dev->mhi_cntrl = mhi_cntrl;
dev_set_name(&mhi_dev->dev, "%s", dev_name(mhi_cntrl->cntrl_dev));
mhi_dev->name = dev_name(mhi_cntrl->cntrl_dev);
/* Init wakeup source */
device_init_wakeup(&mhi_dev->dev, true);
......@@ -914,6 +956,8 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
mhi_cntrl->mhi_dev = mhi_dev;
mhi_create_debugfs(mhi_cntrl);
return 0;
error_add_dev:
......@@ -936,6 +980,8 @@ void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
struct mhi_chan *mhi_chan = mhi_cntrl->mhi_chan;
unsigned int i;
mhi_destroy_debugfs(mhi_cntrl);
kfree(mhi_cntrl->mhi_cmd);
kfree(mhi_cntrl->mhi_event);
......@@ -953,6 +999,22 @@ void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
}
EXPORT_SYMBOL_GPL(mhi_unregister_controller);
struct mhi_controller *mhi_alloc_controller(void)
{
struct mhi_controller *mhi_cntrl;
mhi_cntrl = kzalloc(sizeof(*mhi_cntrl), GFP_KERNEL);
return mhi_cntrl;
}
EXPORT_SYMBOL_GPL(mhi_alloc_controller);
void mhi_free_controller(struct mhi_controller *mhi_cntrl)
{
kfree(mhi_cntrl);
}
EXPORT_SYMBOL_GPL(mhi_free_controller);
int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
......@@ -1249,7 +1311,7 @@ static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env)
struct mhi_device *mhi_dev = to_mhi_device(dev);
return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT,
mhi_dev->chan_name);
mhi_dev->name);
}
static int mhi_match(struct device *dev, struct device_driver *drv)
......@@ -1266,7 +1328,7 @@ static int mhi_match(struct device *dev, struct device_driver *drv)
return 0;
for (id = mhi_drv->id_table; id->chan[0]; id++)
if (!strcmp(mhi_dev->chan_name, id->chan)) {
if (!strcmp(mhi_dev->name, id->chan)) {
mhi_dev->id = id;
return 1;
}
......@@ -1279,15 +1341,18 @@ struct bus_type mhi_bus_type = {
.dev_name = "mhi",
.match = mhi_match,
.uevent = mhi_uevent,
.dev_groups = mhi_dev_groups,
};
static int __init mhi_init(void)
{
mhi_debugfs_init();
return bus_register(&mhi_bus_type);
}
static void __exit mhi_exit(void)
{
mhi_debugfs_exit();
bus_unregister(&mhi_bus_type);
}
......
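mhi_alloc_controller() and mhi_free_controller() give controller drivers a matched allocate/free pair for struct mhi_controller, and mhi_register_controller() now takes a const config. A minimal sketch of the expected call sequence; the config contents and the controller-specific field setup are elided and hypothetical:

#include <linux/device.h>
#include <linux/mhi.h>

static const struct mhi_controller_config example_cfg = {
	/* channel and event ring layout omitted; controller specific */
};

static int example_controller_setup(struct device *dev)
{
	struct mhi_controller *mhi_cntrl;
	int ret;

	mhi_cntrl = mhi_alloc_controller();
	if (!mhi_cntrl)
		return -ENOMEM;

	mhi_cntrl->cntrl_dev = dev;
	/* regs, irq table, runtime_get/put callbacks, etc. set up here */

	ret = mhi_register_controller(mhi_cntrl, &example_cfg);
	if (ret)
		mhi_free_controller(mhi_cntrl);
	return ret;
}

static void example_controller_teardown(struct mhi_controller *mhi_cntrl)
{
	mhi_unregister_controller(mhi_cntrl);
	mhi_free_controller(mhi_cntrl);
}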
......@@ -570,6 +570,30 @@ struct mhi_chan {
/* Default MHI timeout */
#define MHI_TIMEOUT_MS (1000)
/* debugfs related functions */
#ifdef CONFIG_MHI_BUS_DEBUG
void mhi_create_debugfs(struct mhi_controller *mhi_cntrl);
void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl);
void mhi_debugfs_init(void);
void mhi_debugfs_exit(void);
#else
static inline void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
{
}
static inline void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl)
{
}
static inline void mhi_debugfs_init(void)
{
}
static inline void mhi_debugfs_exit(void)
{
}
#endif
struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
int mhi_destroy_device(struct device *dev, void *data);
......@@ -592,13 +616,24 @@ void mhi_pm_st_worker(struct work_struct *work);
void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl);
void mhi_fw_load_worker(struct work_struct *work);
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
void mhi_ctrl_ev_task(unsigned long data);
int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
enum mhi_cmd_type cmd);
static inline bool mhi_is_active(struct mhi_controller *mhi_cntrl)
{
return (mhi_cntrl->dev_state >= MHI_STATE_M0 &&
mhi_cntrl->dev_state <= MHI_STATE_M3_FAST);
}
static inline void mhi_trigger_resume(struct mhi_controller *mhi_cntrl)
{
pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
/* Register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
......
......@@ -249,7 +249,7 @@ int mhi_destroy_device(struct device *dev, void *data)
put_device(&mhi_dev->dl_chan->mhi_dev->dev);
dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
mhi_dev->chan_name);
mhi_dev->name);
/* Notify the client and remove the device from MHI bus */
device_del(dev);
......@@ -327,10 +327,10 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
}
/* Channel name is same for both UL and DL */
mhi_dev->chan_name = mhi_chan->name;
mhi_dev->name = mhi_chan->name;
dev_set_name(&mhi_dev->dev, "%s_%s",
dev_name(mhi_cntrl->cntrl_dev),
mhi_dev->chan_name);
mhi_dev->name);
/* Init wakeup source if available */
if (mhi_dev->dl_chan && mhi_dev->dl_chan->wake_capable)
......@@ -909,8 +909,7 @@ void mhi_ctrl_ev_task(unsigned long data)
* process it since we are probably in a suspended state,
* so trigger a resume.
*/
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
mhi_trigger_resume(mhi_cntrl);
return;
}
......@@ -971,10 +970,8 @@ int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
}
/* we're in M3 or transitioning to M3 */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);
......@@ -1032,10 +1029,8 @@ int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
}
/* we're in M3 or transitioning to M3 */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);
......@@ -1147,10 +1142,8 @@ int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
/* we're in M3 or transitioning to M3 */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);
......
......@@ -256,6 +256,7 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
dev_err(dev, "Unable to transition to M0 state\n");
return -EIO;
}
mhi_cntrl->M0++;
/* Wake up the device */
read_lock_bh(&mhi_cntrl->pm_lock);
......@@ -326,6 +327,8 @@ void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
mhi_cntrl->dev_state = MHI_STATE_M2;
write_unlock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->M2++;
wake_up_all(&mhi_cntrl->state_event);
/* If there are any pending resources, exit M2 immediately */
......@@ -362,6 +365,7 @@ int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
return -EIO;
}
mhi_cntrl->M3++;
wake_up_all(&mhi_cntrl->state_event);
return 0;
......@@ -686,7 +690,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
return -EIO;
/* Return busy if there are any pending resources */
if (atomic_read(&mhi_cntrl->dev_wake))
if (atomic_read(&mhi_cntrl->dev_wake) ||
atomic_read(&mhi_cntrl->pending_pkts))
return -EBUSY;
/* Take MHI out of M2 state */
......@@ -712,7 +717,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
write_lock_irq(&mhi_cntrl->pm_lock);
if (atomic_read(&mhi_cntrl->dev_wake)) {
if (atomic_read(&mhi_cntrl->dev_wake) ||
atomic_read(&mhi_cntrl->pending_pkts)) {
write_unlock_irq(&mhi_cntrl->pm_lock);
return -EBUSY;
}
......@@ -822,11 +828,8 @@ int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
/* Wake up the device */
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_get(mhi_cntrl, true);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
read_unlock_bh(&mhi_cntrl->pm_lock);
ret = wait_event_timeout(mhi_cntrl->state_event,
......@@ -915,7 +918,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
dev_info(dev, "Requested to power ON\n");
if (mhi_cntrl->nr_irqs < mhi_cntrl->total_ev_rings)
if (mhi_cntrl->nr_irqs < 1)
return -EINVAL;
/* Supply default wake routines if not provided by controller driver */
......@@ -1113,6 +1116,9 @@ void mhi_device_get(struct mhi_device *mhi_dev)
mhi_dev->dev_wake++;
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
mhi_cntrl->wake_get(mhi_cntrl, true);
read_unlock_bh(&mhi_cntrl->pm_lock);
}
......@@ -1137,10 +1143,8 @@ void mhi_device_put(struct mhi_device *mhi_dev)
mhi_dev->dev_wake--;
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
......
......@@ -93,8 +93,9 @@ config PPDEV
config VIRTIO_CONSOLE
tristate "Virtio console"
depends on VIRTIO && TTY
depends on TTY
select HVC_DRIVER
select VIRTIO
help
Virtio console for use with hypervisors.
......
......@@ -853,8 +853,10 @@ static void lp_console_write(struct console *co, const char *s,
count--;
do {
written = parport_write(port, crlf, i);
if (written > 0)
i -= written, crlf += written;
if (written > 0) {
i -= written;
crlf += written;
}
} while (i > 0 && (CONSOLE_LP_STRICT || written > 0));
}
} while (count > 0 && (CONSOLE_LP_STRICT || written > 0));
......
......@@ -726,6 +726,33 @@ static ssize_t read_iter_zero(struct kiocb *iocb, struct iov_iter *iter)
return written;
}
static ssize_t read_zero(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
size_t cleared = 0;
while (count) {
size_t chunk = min_t(size_t, count, PAGE_SIZE);
size_t left;
left = clear_user(buf + cleared, chunk);
if (unlikely(left)) {
cleared += (chunk - left);
if (!cleared)
return -EFAULT;
break;
}
cleared += chunk;
count -= chunk;
if (signal_pending(current))
break;
cond_resched();
}
return cleared;
}
static int mmap_zero(struct file *file, struct vm_area_struct *vma)
{
#ifndef CONFIG_MMU
......@@ -921,6 +948,7 @@ static const struct file_operations zero_fops = {
.llseek = zero_lseek,
.write = write_zero,
.read_iter = read_iter_zero,
.read = read_zero,
.write_iter = write_iter_zero,
.mmap = mmap_zero,
.get_unmapped_area = get_unmapped_area_zero,
......
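Wiring up .read means a plain read(2) of /dev/zero is serviced by the chunked clear_user() loop above, which also breaks out on pending signals and calls cond_resched() between chunks. A small userspace sketch exercising that path:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[8192];
	int fd = open("/dev/zero", O_RDONLY);
	ssize_t n;

	if (fd < 0) {
		perror("open /dev/zero");
		return 1;
	}
	n = read(fd, buf, sizeof(buf));	/* serviced by read_zero() */
	printf("read %zd zero bytes, first byte = %d\n", n, buf[0]);
	close(fd);
	return 0;
}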
......@@ -195,10 +195,7 @@ mspec_mmap(struct file *file, struct vm_area_struct *vma,
pages = vma_pages(vma);
vdata_size = sizeof(struct vma_data) + pages * sizeof(long);
if (vdata_size <= PAGE_SIZE)
vdata = kzalloc(vdata_size, GFP_KERNEL);
else
vdata = vzalloc(vdata_size);
vdata = kvzalloc(vdata_size, GFP_KERNEL);
if (!vdata)
return -ENOMEM;
......
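kvzalloc() folds the old size check into one call: it tries a kmalloc-style allocation first and falls back to vmalloc for large sizes, and kvfree() releases either kind. A minimal sketch of the idiom with a hypothetical table allocation:

#include <linux/mm.h>
#include <linux/slab.h>

static void *example_alloc_table(size_t nr_entries)
{
	/* a real caller would guard the multiplication, e.g. with array_size() */
	return kvzalloc(nr_entries * sizeof(long), GFP_KERNEL);
}

static void example_free_table(void *table)
{
	kvfree(table);	/* correct for both kmalloc- and vmalloc-backed memory */
}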
......@@ -491,18 +491,7 @@ static struct platform_driver axp288_extcon_driver = {
.pm = &axp288_extcon_pm_ops,
},
};
static int __init axp288_extcon_init(void)
{
return platform_driver_register(&axp288_extcon_driver);
}
module_init(axp288_extcon_init);
static void __exit axp288_extcon_exit(void)
{
platform_driver_unregister(&axp288_extcon_driver);
}
module_exit(axp288_extcon_exit);
module_platform_driver(axp288_extcon_driver);
MODULE_AUTHOR("Ramakrishna Pallala <ramakrishna.pallala@intel.com>");
MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>");
......
......@@ -713,7 +713,7 @@ static int max14577_muic_probe(struct platform_device *pdev)
max14577_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "failed to allocate memory for extcon\n");
return -ENOMEM;
return PTR_ERR(info->edev);
}
ret = devm_extcon_dev_register(&pdev->dev, info->edev);
......
......@@ -1157,7 +1157,7 @@ static int max77693_muic_probe(struct platform_device *pdev)
max77693_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "failed to allocate memory for extcon\n");
return -ENOMEM;
return PTR_ERR(info->edev);
}
ret = devm_extcon_dev_register(&pdev->dev, info->edev);
......
......@@ -845,7 +845,7 @@ static int max77843_muic_probe(struct platform_device *pdev)
max77843_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "Failed to allocate memory for extcon\n");
ret = -ENODEV;
ret = PTR_ERR(info->edev);
goto err_muic_irq;
}
......
......@@ -674,7 +674,7 @@ static int max8997_muic_probe(struct platform_device *pdev)
info->edev = devm_extcon_dev_allocate(&pdev->dev, max8997_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "failed to allocate memory for extcon\n");
ret = -ENOMEM;
ret = PTR_ERR(info->edev);
goto err_irq;
}
......
......@@ -2,7 +2,7 @@
/*
* Palmas USB transceiver driver
*
* Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com
* Copyright (C) 2013 Texas Instruments Incorporated - https://www.ti.com
* Author: Graeme Gregory <gg@slimlogic.co.uk>
* Author: Kishon Vijay Abraham I <kishon@ti.com>
* Based on twl6030_usb.c
......@@ -205,21 +205,15 @@ static int palmas_usb_probe(struct platform_device *pdev)
palmas_usb->id_gpiod = devm_gpiod_get_optional(&pdev->dev, "id",
GPIOD_IN);
if (PTR_ERR(palmas_usb->id_gpiod) == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (IS_ERR(palmas_usb->id_gpiod)) {
dev_err(&pdev->dev, "failed to get id gpio\n");
return PTR_ERR(palmas_usb->id_gpiod);
}
if (IS_ERR(palmas_usb->id_gpiod))
return dev_err_probe(&pdev->dev, PTR_ERR(palmas_usb->id_gpiod),
"failed to get id gpio\n");
palmas_usb->vbus_gpiod = devm_gpiod_get_optional(&pdev->dev, "vbus",
GPIOD_IN);
if (PTR_ERR(palmas_usb->vbus_gpiod) == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (IS_ERR(palmas_usb->vbus_gpiod)) {
dev_err(&pdev->dev, "failed to get vbus gpio\n");
return PTR_ERR(palmas_usb->vbus_gpiod);
}
if (IS_ERR(palmas_usb->vbus_gpiod))
return dev_err_probe(&pdev->dev, PTR_ERR(palmas_usb->vbus_gpiod),
"failed to get id gpio\n");
if (palmas_usb->enable_id_detection && palmas_usb->id_gpiod) {
palmas_usb->enable_id_detection = false;
......
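dev_err_probe() is what lets the two GPIO lookups above drop their explicit -EPROBE_DEFER branches: it logs the message only for non-deferral errors and returns the error code either way. A minimal sketch of the idiom in a hypothetical probe helper:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/platform_device.h>

static int example_get_gpio(struct platform_device *pdev,
			    struct gpio_desc **out)
{
	struct gpio_desc *gpiod;

	gpiod = devm_gpiod_get_optional(&pdev->dev, "example", GPIOD_IN);
	if (IS_ERR(gpiod))
		/* Logs the message unless the error is -EPROBE_DEFER */
		return dev_err_probe(&pdev->dev, PTR_ERR(gpiod),
				     "failed to get example gpio\n");
	*out = gpiod;
	return 0;
}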
......@@ -2,7 +2,7 @@
/**
* drivers/extcon/extcon-usb-gpio.c - USB GPIO extcon driver
*
* Copyright (C) 2015 Texas Instruments Incorporated - http://www.ti.com
* Copyright (C) 2015 Texas Instruments Incorporated - https://www.ti.com
* Author: Roger Quadros <rogerq@ti.com>
*/
......
......@@ -148,7 +148,7 @@ struct fme_perf_priv {
struct device *dev;
void __iomem *ioaddr;
struct pmu pmu;
u64 id;
u16 id;
u32 fab_users;
u32 fab_port_id;
......
......@@ -31,12 +31,12 @@ struct cci_drvdata {
struct dfl_fpga_cdev *cdev; /* container device */
};
static void __iomem *cci_pci_ioremap_bar(struct pci_dev *pcidev, int bar)
static void __iomem *cci_pci_ioremap_bar0(struct pci_dev *pcidev)
{
if (pcim_iomap_regions(pcidev, BIT(bar), DRV_NAME))
if (pcim_iomap_regions(pcidev, BIT(0), DRV_NAME))
return NULL;
return pcim_iomap_table(pcidev)[bar];
return pcim_iomap_table(pcidev)[0];
}
static int cci_pci_alloc_irq(struct pci_dev *pcidev)
......@@ -156,8 +156,8 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
goto irq_free_exit;
}
/* start to find Device Feature List from Bar 0 */
base = cci_pci_ioremap_bar(pcidev, 0);
/* start to find Device Feature List in Bar 0 */
base = cci_pci_ioremap_bar0(pcidev);
if (!base) {
ret = -ENOMEM;
goto irq_free_exit;
......@@ -172,7 +172,7 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
start = pci_resource_start(pcidev, 0);
len = pci_resource_len(pcidev, 0);
dfl_fpga_enum_info_add_dfl(info, start, len, base);
dfl_fpga_enum_info_add_dfl(info, start, len);
/*
* find more Device Feature Lists (e.g. Ports) per information
......@@ -196,26 +196,24 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
*/
bar = FIELD_GET(FME_PORT_OFST_BAR_ID, v);
offset = FIELD_GET(FME_PORT_OFST_DFH_OFST, v);
base = cci_pci_ioremap_bar(pcidev, bar);
if (!base)
continue;
start = pci_resource_start(pcidev, bar) + offset;
len = pci_resource_len(pcidev, bar) - offset;
dfl_fpga_enum_info_add_dfl(info, start, len,
base + offset);
dfl_fpga_enum_info_add_dfl(info, start, len);
}
} else if (dfl_feature_is_port(base)) {
start = pci_resource_start(pcidev, 0);
len = pci_resource_len(pcidev, 0);
dfl_fpga_enum_info_add_dfl(info, start, len, base);
dfl_fpga_enum_info_add_dfl(info, start, len);
} else {
ret = -ENODEV;
goto irq_free_exit;
}
/* release I/O mappings for next step enumeration */
pcim_iounmap_regions(pcidev, BIT(0));
/* start enumeration with prepared enumeration information */
cdev = dfl_fpga_feature_devs_enumerate(info);
if (IS_ERR(cdev)) {
......
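With the ioaddr argument removed, dfl_fpga_enum_info_add_dfl() records only the physical window and the DFL core performs its own mapping during enumeration, which is why dfl-pci can release its BAR mappings before enumerating. A minimal sketch of the enumeration sequence against the new signature; error handling is trimmed, the IRQ table is omitted, and the function name is hypothetical:

#include <linux/pci.h>

#include "dfl.h"	/* local header declaring the enumeration API */

static int example_enumerate_dfl(struct pci_dev *pcidev,
				 resource_size_t start, resource_size_t len)
{
	struct dfl_fpga_enum_info *info;
	struct dfl_fpga_cdev *cdev;
	int ret = 0;

	info = dfl_fpga_enum_info_alloc(&pcidev->dev);
	if (!info)
		return -ENOMEM;

	/* Only the physical window is recorded; no __iomem pointer any more */
	ret = dfl_fpga_enum_info_add_dfl(info, start, len);
	if (ret)
		goto out_free;

	cdev = dfl_fpga_feature_devs_enumerate(info);
	if (IS_ERR(cdev))
		ret = PTR_ERR(cdev);
	/* a real driver would stash cdev in its drvdata here */

out_free:
	dfl_fpga_enum_info_free(info);
	return ret;
}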
......@@ -197,7 +197,7 @@ int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id);
* @id: unique dfl private feature id.
*/
struct dfl_feature_id {
u64 id;
u16 id;
};
/**
......@@ -236,16 +236,18 @@ struct dfl_feature_irq_ctx {
* @irq_ctx: interrupt context list.
* @nr_irqs: number of interrupt contexts.
* @ops: ops of this sub feature.
* @ddev: ptr to the dfl device of this sub feature.
* @priv: priv data of this feature.
*/
struct dfl_feature {
struct platform_device *dev;
u64 id;
u16 id;
int resource_index;
void __iomem *ioaddr;
struct dfl_feature_irq_ctx *irq_ctx;
unsigned int nr_irqs;
const struct dfl_feature_ops *ops;
struct dfl_device *ddev;
void *priv;
};
......@@ -365,7 +367,7 @@ struct platform_device *dfl_fpga_inode_to_feature_dev(struct inode *inode)
(feature) < (pdata)->features + (pdata)->num; (feature)++)
static inline
struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u64 id)
struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u16 id)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
struct dfl_feature *feature;
......@@ -378,7 +380,7 @@ struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u64 id)
}
static inline
void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u64 id)
void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u16 id)
{
struct dfl_feature *feature = dfl_get_feature_by_id(dev, id);
......@@ -389,7 +391,7 @@ void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u64 id)
return NULL;
}
static inline bool is_dfl_feature_present(struct device *dev, u64 id)
static inline bool is_dfl_feature_present(struct device *dev, u16 id)
{
return !!dfl_get_feature_ioaddr_by_id(dev, id);
}
......@@ -441,22 +443,17 @@ struct dfl_fpga_enum_info {
*
* @start: base address of this device feature list.
* @len: size of this device feature list.
* @ioaddr: mapped base address of this device feature list.
* @node: node in list of device feature lists.
*/
struct dfl_fpga_enum_dfl {
resource_size_t start;
resource_size_t len;
void __iomem *ioaddr;
struct list_head node;
};
struct dfl_fpga_enum_info *dfl_fpga_enum_info_alloc(struct device *dev);
int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info,
resource_size_t start, resource_size_t len,
void __iomem *ioaddr);
resource_size_t start, resource_size_t len);
int dfl_fpga_enum_info_add_irq(struct dfl_fpga_enum_info *info,
unsigned int nr_irqs, int *irq_table);
void dfl_fpga_enum_info_free(struct dfl_fpga_enum_info *info);
......@@ -519,4 +516,88 @@ long dfl_feature_ioctl_set_irq(struct platform_device *pdev,
struct dfl_feature *feature,
unsigned long arg);
/**
* enum dfl_id_type - define the DFL FIU types
*/
enum dfl_id_type {
FME_ID,
PORT_ID,
DFL_ID_MAX,
};
/**
* struct dfl_device_id - dfl device identifier
* @type: contains 4 bits DFL FIU type of the device. See enum dfl_id_type.
* @feature_id: contains 12 bits feature identifier local to its DFL FIU type.
* @driver_data: driver specific data.
*/
struct dfl_device_id {
u8 type;
u16 feature_id;
unsigned long driver_data;
};
/**
* struct dfl_device - represent an dfl device on dfl bus
*
* @dev: generic device interface.
* @id: id of the dfl device.
* @type: type of DFL FIU of the device. See enum dfl_id_type.
* @feature_id: 16 bits feature identifier local to its DFL FIU type.
* @mmio_res: mmio resource of this dfl device.
* @irqs: list of Linux IRQ numbers of this dfl device.
* @num_irqs: number of IRQs supported by this dfl device.
* @cdev: pointer to DFL FPGA container device this dfl device belongs to.
* @id_entry: matched id entry in dfl driver's id table.
*/
struct dfl_device {
struct device dev;
int id;
u8 type;
u16 feature_id;
struct resource mmio_res;
int *irqs;
unsigned int num_irqs;
struct dfl_fpga_cdev *cdev;
const struct dfl_device_id *id_entry;
};
/**
* struct dfl_driver - represent an dfl device driver
*
* @drv: driver model structure.
* @id_table: pointer to table of device IDs the driver is interested in.
* { } member terminated.
* @probe: mandatory callback for device binding.
* @remove: callback for device unbinding.
*/
struct dfl_driver {
struct device_driver drv;
const struct dfl_device_id *id_table;
int (*probe)(struct dfl_device *dfl_dev);
void (*remove)(struct dfl_device *dfl_dev);
};
#define to_dfl_dev(d) container_of(d, struct dfl_device, dev)
#define to_dfl_drv(d) container_of(d, struct dfl_driver, drv)
/*
* use a macro to avoid include chaining to get THIS_MODULE.
*/
#define dfl_driver_register(drv) \
__dfl_driver_register(drv, THIS_MODULE)
int __dfl_driver_register(struct dfl_driver *dfl_drv, struct module *owner);
void dfl_driver_unregister(struct dfl_driver *dfl_drv);
/*
* module_dfl_driver() - Helper macro for drivers that don't do
* anything special in module init/exit. This eliminates a lot of
* boilerplate. Each module may only use this macro once, and
* calling it replaces module_init() and module_exit().
*/
#define module_dfl_driver(__dfl_driver) \
module_driver(__dfl_driver, dfl_driver_register, \
dfl_driver_unregister)
#endif /* __FPGA_DFL_H */
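The additions above are the complete driver-facing surface of the new dfl bus: an id table keyed by FIU type and feature id, probe/remove callbacks taking struct dfl_device, and module_dfl_driver() for the usual init/exit boilerplate. A minimal, hypothetical sub-feature driver skeleton against that interface (the feature id 0x23 is invented for illustration):

#include <linux/device.h>
#include <linux/module.h>

#include "dfl.h"	/* local header defining the dfl bus interfaces */

static int example_dfl_probe(struct dfl_device *dfl_dev)
{
	dev_info(&dfl_dev->dev, "bound to FIU type %u, feature 0x%x\n",
		 dfl_dev->type, dfl_dev->feature_id);
	return 0;
}

static void example_dfl_remove(struct dfl_device *dfl_dev)
{
	dev_info(&dfl_dev->dev, "unbound\n");
}

static const struct dfl_device_id example_dfl_ids[] = {
	{ FME_ID, 0x23 },	/* hypothetical private feature id */
	{ }
};

static struct dfl_driver example_dfl_driver = {
	.drv = {
		.name = "dfl-example",
	},
	.id_table = example_dfl_ids,
	.probe = example_dfl_probe,
	.remove = example_dfl_remove,
};

module_dfl_driver(example_dfl_driver);

MODULE_LICENSE("GPL v2");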
// SPDX-License-Identifier: GPL-2.0
/*
* FPGA Region - Device Tree support for FPGA programming under Linux
* FPGA Region - Support for FPGA programming under Linux
*
* Copyright (C) 2013-2016 Altera Corporation
* Copyright (C) 2017 Intel Corporation
......
......@@ -196,17 +196,13 @@ static int s10_ops_write_init(struct fpga_manager *mgr,
if (ret < 0)
goto init_done;
ret = wait_for_completion_interruptible_timeout(
ret = wait_for_completion_timeout(
&priv->status_return_completion, S10_RECONFIG_TIMEOUT);
if (!ret) {
dev_err(dev, "timeout waiting for RECONFIG_REQUEST\n");
ret = -ETIMEDOUT;
goto init_done;
}
if (ret < 0) {
dev_err(dev, "error (%d) waiting for RECONFIG_REQUEST\n", ret);
goto init_done;
}
ret = 0;
if (!test_and_clear_bit(SVC_STATUS_OK, &priv->status)) {
......@@ -318,7 +314,7 @@ static int s10_ops_write(struct fpga_manager *mgr, const char *buf,
*/
wait_status = 1; /* not timed out */
if (!priv->status)
wait_status = wait_for_completion_interruptible_timeout(
wait_status = wait_for_completion_timeout(
&priv->status_return_completion,
S10_BUFFER_TIMEOUT);
......@@ -340,13 +336,6 @@ static int s10_ops_write(struct fpga_manager *mgr, const char *buf,
ret = -ETIMEDOUT;
break;
}
if (wait_status < 0) {
ret = wait_status;
dev_err(dev,
"error (%d) waiting for svc layer buffers\n",
ret);
break;
}
}
if (!s10_free_buffers(mgr))
......@@ -372,7 +361,7 @@ static int s10_ops_write_complete(struct fpga_manager *mgr,
if (ret < 0)
break;
ret = wait_for_completion_interruptible_timeout(
ret = wait_for_completion_timeout(
&priv->status_return_completion, timeout);
if (!ret) {
dev_err(dev,
......@@ -380,12 +369,6 @@ static int s10_ops_write_complete(struct fpga_manager *mgr,
ret = -ETIMEDOUT;
break;
}
if (ret < 0) {
dev_err(dev,
"error (%d) waiting for RECONFIG_COMPLETED\n",
ret);
break;
}
/* Not error or timeout, so ret is # of jiffies until timeout */
timeout = ret;
ret = 0;
......
......@@ -27,11 +27,22 @@ struct xilinx_spi_conf {
struct gpio_desc *done;
};
static enum fpga_mgr_states xilinx_spi_state(struct fpga_manager *mgr)
static int get_done_gpio(struct fpga_manager *mgr)
{
struct xilinx_spi_conf *conf = mgr->priv;
int ret;
ret = gpiod_get_value(conf->done);
if (ret < 0)
dev_err(&mgr->dev, "Error reading DONE (%d)\n", ret);
return ret;
}
if (!gpiod_get_value(conf->done))
static enum fpga_mgr_states xilinx_spi_state(struct fpga_manager *mgr)
{
if (!get_done_gpio(mgr))
return FPGA_MGR_STATE_RESET;
return FPGA_MGR_STATE_UNKNOWN;
......@@ -57,11 +68,21 @@ static int wait_for_init_b(struct fpga_manager *mgr, int value,
if (conf->init_b) {
while (time_before(jiffies, timeout)) {
/* dump_state(conf, "wait for init_d .."); */
if (gpiod_get_value(conf->init_b) == value)
int ret = gpiod_get_value(conf->init_b);
if (ret == value)
return 0;
if (ret < 0) {
dev_err(&mgr->dev, "Error reading INIT_B (%d)\n", ret);
return ret;
}
usleep_range(100, 400);
}
dev_err(&mgr->dev, "Timeout waiting for INIT_B to %s\n",
value ? "assert" : "deassert");
return -ETIMEDOUT;
}
......@@ -78,7 +99,7 @@ static int xilinx_spi_write_init(struct fpga_manager *mgr,
int err;
if (info->flags & FPGA_MGR_PARTIAL_RECONFIG) {
dev_err(&mgr->dev, "Partial reconfiguration not supported.\n");
dev_err(&mgr->dev, "Partial reconfiguration not supported\n");
return -EINVAL;
}
......@@ -86,7 +107,6 @@ static int xilinx_spi_write_init(struct fpga_manager *mgr,
err = wait_for_init_b(mgr, 1, 1); /* min is 500 ns */
if (err) {
dev_err(&mgr->dev, "INIT_B pin did not go low\n");
gpiod_set_value(conf->prog_b, 0);
return err;
}
......@@ -94,12 +114,10 @@ static int xilinx_spi_write_init(struct fpga_manager *mgr,
gpiod_set_value(conf->prog_b, 0);
err = wait_for_init_b(mgr, 0, 0);
if (err) {
dev_err(&mgr->dev, "INIT_B pin did not go high\n");
if (err)
return err;
}
if (gpiod_get_value(conf->done)) {
if (get_done_gpio(mgr)) {
dev_err(&mgr->dev, "Unexpected DONE pin state...\n");
return -EIO;
}
......@@ -152,25 +170,46 @@ static int xilinx_spi_write_complete(struct fpga_manager *mgr,
struct fpga_image_info *info)
{
struct xilinx_spi_conf *conf = mgr->priv;
unsigned long timeout;
unsigned long timeout = jiffies + usecs_to_jiffies(info->config_complete_timeout_us);
bool expired = false;
int done;
int ret;
if (gpiod_get_value(conf->done))
return xilinx_spi_apply_cclk_cycles(conf);
/*
* This loop is carefully written such that if the driver is
* scheduled out for more than 'timeout', we still check for DONE
* before giving up and we apply 8 extra CCLK cycles in all cases.
*/
while (!expired) {
expired = time_after(jiffies, timeout);
timeout = jiffies + usecs_to_jiffies(info->config_complete_timeout_us);
while (time_before(jiffies, timeout)) {
done = get_done_gpio(mgr);
if (done < 0)
return done;
ret = xilinx_spi_apply_cclk_cycles(conf);
if (ret)
return ret;
if (gpiod_get_value(conf->done))
return xilinx_spi_apply_cclk_cycles(conf);
if (done)
return 0;
}
if (conf->init_b) {
ret = gpiod_get_value(conf->init_b);
if (ret < 0) {
dev_err(&mgr->dev, "Error reading INIT_B (%d)\n", ret);
return ret;
}
dev_err(&mgr->dev,
ret ? "CRC error or invalid device\n"
: "Missing sync word or incomplete bitstream\n");
} else {
dev_err(&mgr->dev, "Timeout after config data transfer\n");
}
dev_err(&mgr->dev, "Timeout after config data transfer.\n");
return -ETIMEDOUT;
}
......
......@@ -838,7 +838,7 @@ static int load_copro_firmware(struct fsi_master_acf *master)
rc = request_firmware(&fw, FW_FILE_NAME, master->dev);
if (rc) {
dev_err(
master->dev, "Error %d to load firwmare '%s' !\n",
master->dev, "Error %d to load firmware '%s' !\n",
rc, FW_FILE_NAME);
return rc;
}
......@@ -1039,7 +1039,8 @@ static void fsi_master_acf_setup_external(struct fsi_master_acf *master)
gpiod_direction_input(master->gpio_data);
}
static int fsi_master_acf_link_enable(struct fsi_master *_master, int link)
static int fsi_master_acf_link_enable(struct fsi_master *_master, int link,
bool enable)
{
struct fsi_master_acf *master = to_fsi_master_acf(_master);
int rc = -EBUSY;
......@@ -1049,7 +1050,7 @@ static int fsi_master_acf_link_enable(struct fsi_master *_master, int link)
mutex_lock(&master->lock);
if (!master->external_mode) {
gpiod_set_value(master->gpio_enable, 1);
gpiod_set_value(master->gpio_enable, enable ? 1 : 0);
rc = 0;
}
mutex_unlock(&master->lock);
......