Commit f678d6da authored by Linus Torvalds

Merge tag 'char-misc-5.2-rc1-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc update part 2 from Greg KH:
 "Here is the "real" big set of char/misc driver patches for 5.2-rc1

  Loads of different driver subsystem stuff in here, all over the place:
   - thunderbolt driver updates
   - habanalabs driver updates
   - nvmem driver updates
   - extcon driver updates
   - intel_th driver updates
   - mei driver updates
   - coresight driver updates
   - soundwire driver cleanups and updates
   - fastrpc driver updates
   - other minor driver updates
   - chardev minor fixups

  Feels like this tree is getting to be a dumping ground of "small
  driver subsystems" these days. Which is fine with me, if it makes
  things easier for those subsystem maintainers.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-5.2-rc1-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (255 commits)
  intel_th: msu: Add current window tracking
  intel_th: msu: Add a sysfs attribute to trigger window switch
  intel_th: msu: Correct the block wrap detection
  intel_th: Add switch triggering support
  intel_th: gth: Factor out trace start/stop
  intel_th: msu: Factor out pipeline draining
  intel_th: msu: Switch over to scatterlist
  intel_th: msu: Replace open-coded list_{first,last,next}_entry variants
  intel_th: Only report useful IRQs to subdevices
  intel_th: msu: Start handling IRQs
  intel_th: pci: Use MSI interrupt signalling
  intel_th: Communicate IRQ via resource
  intel_th: Add "rtit" source device
  intel_th: Skip subdevices if their MMIO is missing
  intel_th: Rework resource passing between glue layers and core
  intel_th: SPDX-ify the documentation
  intel_th: msu: Fix single mode with IOMMU
  coresight: funnel: Support static funnel
  dt-bindings: arm: coresight: Unify funnel DT binding
  coresight: replicator: Add new device id for static replicator
  ...
parents 2310673c aad14ad3
...@@ -6,6 +6,8 @@ Description:
This file allows user to read/write the raw NVMEM contents.
Permissions for write to this file depends on the nvmem
provider configuration.
Note: This file is only present if CONFIG_NVMEM_SYSFS
is enabled
ex:
hexdump /sys/bus/nvmem/devices/qfprom0/nvmem
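A minimal userspace sketch equivalent to the hexdump example above; the "qfprom0" device name is only illustrative and depends on which NVMEM provider is present on the system.

/*
 * Dump the first bytes of a raw NVMEM provider via sysfs.
 * "qfprom0" is illustrative; substitute the provider on your system.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[16];
	ssize_t n, i;
	int fd = open("/sys/bus/nvmem/devices/qfprom0/nvmem", O_RDONLY);

	if (fd < 0)
		return 1;

	n = read(fd, buf, sizeof(buf));
	for (i = 0; i < n; i++)
		printf("%02x ", buf[i]);
	printf("\n");

	close(fd);
	return 0;
}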
......
...@@ -30,4 +30,12 @@ Description: (RW) Configure MSC buffer size for "single" or "multi" modes.
there are no active users and tracing is not enabled) and then
allocates a new one.
What: /sys/bus/intel_th/devices/<intel_th_id>-msc<msc-id>/win_switch
Date: May 2019
KernelVersion: 5.2
Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Description: (RW) Trigger a window switch for the MSC's buffer, in
multi-window mode. In "multi" mode, accepts writes of "1", triggering
a window switch for the buffer. Returns an error in any other
operating mode or on attempts to write anything other than "1".
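As a rough illustration of the new attribute, a userspace program could trigger a switch like this; the "0-msc0" device name is a placeholder for whichever intel_th MSC instance exists on the target system.

/*
 * Illustrative only: trigger a window switch from userspace.  The
 * write is rejected unless the MSC is in "multi" mode.
 */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int ret = 0;
	int fd = open("/sys/bus/intel_th/devices/0-msc0/win_switch", O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, "1", 1) != 1)
		ret = 1;	/* e.g. not in "multi" mode */
	close(fd);
	return ret;
}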
...@@ -65,3 +65,18 @@ Description: Display the ME firmware version.
<platform>:<major>.<minor>.<milestone>.<build_no>.
There can be up to three such blocks for different
FW components.
What: /sys/class/mei/meiN/dev_state
Date: Mar 2019
KernelVersion: 5.1
Contact: Tomas Winkler <tomas.winkler@intel.com>
Description: Display the ME device state.
The device state can have the following values:
INITIALIZING
INIT_CLIENTS
ENABLED
RESETTING
DISABLED
POWER_DOWN
POWER_UP
...@@ -8,7 +8,8 @@ through the intermediate links connecting the source to the currently selected
sink. Each CoreSight component device should use these properties to describe
its hardware characteristics.
* Required properties for all components *except* non-configurable replicators: * Required properties for all components *except* non-configurable replicators
and non-configurable funnels:
* compatible: These have to be supplemented with "arm,primecell" as
drivers are using the AMBA bus interface. Possible values include:
...@@ -24,8 +25,10 @@ its hardware characteristics.
discovered at boot time when the device is probed.
"arm,coresight-tmc", "arm,primecell";
- Trace Funnel: - Trace Programmable Funnel:
"arm,coresight-funnel", "arm,primecell"; "arm,coresight-dynamic-funnel", "arm,primecell";
"arm,coresight-funnel", "arm,primecell"; (OBSOLETE. For
backward compatibility and will be removed)
- Embedded Trace Macrocell (version 3.x) and
Program Flow Trace Macrocell:
...@@ -65,11 +68,17 @@ its hardware characteristics.
"stm-stimulus-base", each corresponding to the areas defined in "reg".
* Required properties for devices that don't show up on the AMBA bus, such as
non-configurable replicators: non-configurable replicators and non-configurable funnels:
* compatible: Currently supported value is (note the absence of the
AMBA marker):
- "arm,coresight-replicator" - Coresight Non-configurable Replicator:
"arm,coresight-static-replicator";
"arm,coresight-replicator"; (OBSOLETE. For backward
compatibility and will be removed)
- Coresight Non-configurable Funnel:
"arm,coresight-static-funnel";
* port or ports: see "Graph bindings for Coresight" below.
...@@ -169,7 +178,7 @@ Example:
/* non-configurable replicators don't show up on the
* AMBA bus. As such no need to add "arm,primecell".
*/
compatible = "arm,coresight-replicator"; compatible = "arm,coresight-static-replicator";
out-ports {
#address-cells = <1>;
...@@ -200,8 +209,45 @@ Example:
};
};
funnel {
/*
* non-configurable funnel don't show up on the AMBA
* bus. As such no need to add "arm,primecell".
*/
compatible = "arm,coresight-static-funnel";
clocks = <&crg_ctrl HI3660_PCLK>;
clock-names = "apb_pclk";
out-ports {
port {
combo_funnel_out: endpoint {
remote-endpoint = <&top_funnel_in>;
};
};
};
in-ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
combo_funnel_in0: endpoint {
remote-endpoint = <&cluster0_etf_out>;
};
};
port@1 {
reg = <1>;
combo_funnel_in1: endpoint {
remote-endpoint = <&cluster1_etf_out>;
};
};
};
};
funnel@20040000 {
compatible = "arm,coresight-funnel", "arm,primecell"; compatible = "arm,coresight-dynamic-funnel", "arm,primecell";
reg = <0 0x20040000 0 0x1000>;
clocks = <&oscclk6a>;
......
...@@ -9,6 +9,7 @@ Required properties:
- compatible : Must be one of
"u-blox,neo-6m"
"u-blox,neo-8" "u-blox,neo-8"
"u-blox,neo-m8" "u-blox,neo-m8"
......
======================================================================
Device tree bindings for Aspeed AST2400/AST2500 PCI-to-AHB Bridge Control Driver
======================================================================
The bridge is available on platforms with the VGA enabled on the Aspeed device.
In this case, the host has access to a 64KiB window into all of the BMC's
memory. The BMC can disable this bridge. If the bridge is enabled, the host
has read access to all regions of the BMC's memory; write access to each
region additionally depends on a register controlled by the BMC.
Required properties:
===================
- compatible: must be one of:
- "aspeed,ast2400-p2a-ctrl"
- "aspeed,ast2500-p2a-ctrl"
Optional properties:
===================
- memory-region: A phandle to a reserved_memory region to be used for the PCI
to AHB mapping
The p2a-control node should be the child of a syscon node with the required
property:
- compatible : Should be one of the following:
"aspeed,ast2400-scu", "syscon", "simple-mfd"
"aspeed,g4-scu", "syscon", "simple-mfd"
"aspeed,ast2500-scu", "syscon", "simple-mfd"
"aspeed,g5-scu", "syscon", "simple-mfd"
Example
===================
g4 Example
----------
syscon: scu@1e6e2000 {
compatible = "aspeed,ast2400-scu", "syscon", "simple-mfd";
reg = <0x1e6e2000 0x1a8>;
p2a: p2a-control {
compatible = "aspeed,ast2400-p2a-ctrl";
memory-region = <&reserved_memory>;
};
};
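As a hedged sketch (not the actual aspeed-p2a-ctrl implementation), a platform driver could resolve the optional memory-region phandle described above with the generic OF helpers:

/*
 * Resolve the optional "memory-region" phandle into a physical
 * address range.  Error handling trimmed to the essentials.
 */
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>

static int p2a_get_reserved_region(struct platform_device *pdev,
				   struct resource *res)
{
	struct device_node *node;
	int ret;

	node = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
	if (!node)
		return -ENODEV;		/* the property is optional */

	ret = of_address_to_resource(node, 0, res);
	of_node_put(node);
	return ret;
}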
...@@ -8,11 +8,12 @@ Required properties:
"allwinner,sun8i-h3-sid"
"allwinner,sun50i-a64-sid"
"allwinner,sun50i-h5-sid"
"allwinner,sun50i-h6-sid"
- reg: Should contain registers location and length
= Data cells =
Are child nodes of qfprom, bindings of which as described in Are child nodes of sunxi-sid, bindings of which as described in
bindings/nvmem/nvmem.txt
Example for sun4i:
......
Freescale i.MX6 On-Chip OTP Controller (OCOTP) device tree bindings
This binding represents the on-chip eFuse OTP controller found on
i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ and i.MX6SLL SoCs. i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ, i.MX6SLL,
i.MX7D/S, i.MX7ULP and i.MX8MQ SoCs.
Required properties:
- compatible: should be one of
...@@ -13,6 +14,7 @@ Required properties:
"fsl,imx7d-ocotp" (i.MX7D/S),
"fsl,imx6sll-ocotp" (i.MX6SLL),
"fsl,imx7ulp-ocotp" (i.MX7ULP),
"fsl,imx8mq-ocotp" (i.MX8MQ),
followed by "syscon". followed by "syscon".
- #address-cells : Should be 1 - #address-cells : Should be 1
- #size-cells : Should be 1 - #size-cells : Should be 1
......
STMicroelectronics STM32 Factory-programmed data device tree bindings
This represents STM32 Factory-programmed read only non-volatile area: locked
flash, OTP, read-only HW regs... This contains various information such as:
analog calibration data for temperature sensor (e.g. TS_CAL1, TS_CAL2),
internal vref (VREFIN_CAL), unique device ID...
Required properties:
- compatible: Should be one of:
"st,stm32f4-otp"
"st,stm32mp15-bsec"
- reg: Offset and length of factory-programmed area.
- #address-cells: Should be '<1>'.
- #size-cells: Should be '<1>'.
Optional Data cells:
- Must be child nodes as described in nvmem.txt.
Example on stm32f4:
romem: nvmem@1fff7800 {
compatible = "st,stm32f4-otp";
reg = <0x1fff7800 0x400>;
#address-cells = <1>;
#size-cells = <1>;
/* Data cells: ts_cal1 at 0x1fff7a2c */
ts_cal1: calib@22c {
reg = <0x22c 0x2>;
};
...
};
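A hedged consumer-side sketch of reading the ts_cal1 cell defined above through the kernel's nvmem consumer API; it assumes the consuming device node references the cell via nvmem-cells = <&ts_cal1>; nvmem-cell-names = "ts_cal1";.

/*
 * Read the 2-byte ts_cal1 calibration word through the nvmem
 * consumer API.  nvmem_cell_read() returns a kmalloc'd buffer that
 * the caller must free.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-consumer.h>
#include <linux/slab.h>
#include <linux/types.h>

static int read_ts_cal1(struct device *dev, u16 *val)
{
	struct nvmem_cell *cell;
	size_t len;
	u16 *p;

	cell = devm_nvmem_cell_get(dev, "ts_cal1");
	if (IS_ERR(cell))
		return PTR_ERR(cell);

	p = nvmem_cell_read(cell, &len);
	if (IS_ERR(p))
		return PTR_ERR(p);

	*val = *p;			/* 2-byte cell, per reg = <0x22c 0x2> */
	kfree(p);
	return 0;
}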
.. SPDX-License-Identifier: GPL-2.0
=======================
Intel(R) Trace Hub (TH)
=======================
......
...@@ -8068,6 +8068,7 @@ F: drivers/gpio/gpio-intel-mid.c
INTERCONNECT API
M: Georgi Djakov <georgi.djakov@linaro.org>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/interconnect/
F: Documentation/devicetree/bindings/interconnect/
......
...@@ -3121,6 +3121,7 @@ static void binder_transaction(struct binder_proc *proc, ...@@ -3121,6 +3121,7 @@ static void binder_transaction(struct binder_proc *proc,
if (target_node && target_node->txn_security_ctx) { if (target_node && target_node->txn_security_ctx) {
u32 secid; u32 secid;
size_t added_size;
security_task_getsecid(proc->tsk, &secid); security_task_getsecid(proc->tsk, &secid);
ret = security_secid_to_secctx(secid, &secctx, &secctx_sz); ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
...@@ -3130,7 +3131,15 @@ static void binder_transaction(struct binder_proc *proc, ...@@ -3130,7 +3131,15 @@ static void binder_transaction(struct binder_proc *proc,
return_error_line = __LINE__; return_error_line = __LINE__;
goto err_get_secctx_failed; goto err_get_secctx_failed;
} }
extra_buffers_size += ALIGN(secctx_sz, sizeof(u64)); added_size = ALIGN(secctx_sz, sizeof(u64));
extra_buffers_size += added_size;
if (extra_buffers_size < added_size) {
/* integer overflow of extra_buffers_size */
return_error = BR_FAILED_REPLY;
return_error_param = EINVAL;
return_error_line = __LINE__;
goto err_bad_extra_size;
}
} }
trace_binder_transaction(reply, t, target_node); trace_binder_transaction(reply, t, target_node);
...@@ -3480,6 +3489,7 @@ static void binder_transaction(struct binder_proc *proc, ...@@ -3480,6 +3489,7 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer->transaction = NULL; t->buffer->transaction = NULL;
binder_alloc_free_buf(&target_proc->alloc, t->buffer); binder_alloc_free_buf(&target_proc->alloc, t->buffer);
err_binder_alloc_buf_failed: err_binder_alloc_buf_failed:
err_bad_extra_size:
if (secctx) if (secctx)
security_release_secctx(secctx, secctx_sz); security_release_secctx(secctx, secctx_sz);
err_get_secctx_failed: err_get_secctx_failed:
......
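The guard added to binder_transaction() above relies on unsigned wrap-around: after an overflow the sum ends up smaller than the value that was just added. A stand-alone restatement of that check:

/*
 * With unsigned arithmetic a wrapped sum is smaller than the addend,
 * which is exactly what the binder patch tests for.
 */
#include <stdbool.h>
#include <stddef.h>

static bool added_size_overflowed(size_t total_after_add, size_t added)
{
	return total_after_add < added;	/* wrapped past SIZE_MAX */
}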
...@@ -973,6 +973,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data) ...@@ -973,6 +973,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
if (ACPI_SUCCESS(status)) { if (ACPI_SUCCESS(status)) {
hdp->hd_phys_address = addr.address.minimum; hdp->hd_phys_address = addr.address.minimum;
hdp->hd_address = ioremap(addr.address.minimum, addr.address.address_length); hdp->hd_address = ioremap(addr.address.minimum, addr.address.address_length);
if (!hdp->hd_address)
return AE_ERROR;
if (hpet_is_known(hdp)) { if (hpet_is_known(hdp)) {
iounmap(hdp->hd_address); iounmap(hdp->hd_address);
......
...@@ -30,7 +30,7 @@ config EXTCON_ARIZONA ...@@ -30,7 +30,7 @@ config EXTCON_ARIZONA
config EXTCON_AXP288 config EXTCON_AXP288
tristate "X-Power AXP288 EXTCON support" tristate "X-Power AXP288 EXTCON support"
depends on MFD_AXP20X && USB_SUPPORT && X86 depends on MFD_AXP20X && USB_SUPPORT && X86 && ACPI
select USB_ROLE_SWITCH select USB_ROLE_SWITCH
help help
Say Y here to enable support for USB peripheral detection Say Y here to enable support for USB peripheral detection
...@@ -60,6 +60,13 @@ config EXTCON_INTEL_CHT_WC ...@@ -60,6 +60,13 @@ config EXTCON_INTEL_CHT_WC
Say Y here to enable extcon support for charger detection / control Say Y here to enable extcon support for charger detection / control
on the Intel Cherrytrail Whiskey Cove PMIC. on the Intel Cherrytrail Whiskey Cove PMIC.
config EXTCON_INTEL_MRFLD
tristate "Intel Merrifield Basin Cove PMIC extcon driver"
depends on INTEL_SOC_PMIC_MRFLD
help
Say Y here to enable extcon support for charger detection / control
on the Intel Merrifield Basin Cove PMIC.
config EXTCON_MAX14577 config EXTCON_MAX14577
tristate "Maxim MAX14577/77836 EXTCON Support" tristate "Maxim MAX14577/77836 EXTCON Support"
depends on MFD_MAX14577 depends on MFD_MAX14577
......
...@@ -11,6 +11,7 @@ obj-$(CONFIG_EXTCON_AXP288) += extcon-axp288.o ...@@ -11,6 +11,7 @@ obj-$(CONFIG_EXTCON_AXP288) += extcon-axp288.o
obj-$(CONFIG_EXTCON_GPIO) += extcon-gpio.o obj-$(CONFIG_EXTCON_GPIO) += extcon-gpio.o
obj-$(CONFIG_EXTCON_INTEL_INT3496) += extcon-intel-int3496.o obj-$(CONFIG_EXTCON_INTEL_INT3496) += extcon-intel-int3496.o
obj-$(CONFIG_EXTCON_INTEL_CHT_WC) += extcon-intel-cht-wc.o obj-$(CONFIG_EXTCON_INTEL_CHT_WC) += extcon-intel-cht-wc.o
obj-$(CONFIG_EXTCON_INTEL_MRFLD) += extcon-intel-mrfld.o
obj-$(CONFIG_EXTCON_MAX14577) += extcon-max14577.o obj-$(CONFIG_EXTCON_MAX14577) += extcon-max14577.o
obj-$(CONFIG_EXTCON_MAX3355) += extcon-max3355.o obj-$(CONFIG_EXTCON_MAX3355) += extcon-max3355.o
obj-$(CONFIG_EXTCON_MAX77693) += extcon-max77693.o obj-$(CONFIG_EXTCON_MAX77693) += extcon-max77693.o
......
...@@ -205,7 +205,7 @@ EXPORT_SYMBOL(devm_extcon_register_notifier);
/**
* devm_extcon_unregister_notifier() * devm_extcon_unregister_notifier()
- Resource-managed extcon_unregister_notifier() * - Resource-managed extcon_unregister_notifier()
* @dev: the device owning the extcon device being created
* @edev: the extcon device
* @id: the unique id among the extcon enumeration
......
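For context on the devm_extcon_*_notifier helpers whose kernel-doc is touched above, a hedged consumer sketch; the phandle-based lookup is an assumption about the consumer's DT wiring.

/*
 * Register a resource-managed notifier for EXTCON_USB state changes.
 * Assumes the consumer's DT node carries an "extcon" phandle.
 */
#include <linux/err.h>
#include <linux/extcon.h>
#include <linux/notifier.h>
#include <linux/platform_device.h>

static int usb_cable_evt(struct notifier_block *nb, unsigned long state,
			 void *ptr)
{
	/* state is non-zero while the EXTCON_USB cable is attached */
	return NOTIFY_OK;
}

static struct notifier_block usb_cable_nb = {
	.notifier_call = usb_cable_evt,
};

static int example_bind_extcon(struct platform_device *pdev)
{
	struct extcon_dev *edev = extcon_get_edev_by_phandle(&pdev->dev, 0);

	if (IS_ERR(edev))
		return PTR_ERR(edev);

	/* unregistered automatically when the device is unbound */
	return devm_extcon_register_notifier(&pdev->dev, edev, EXTCON_USB,
					     &usb_cable_nb);
}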
...@@ -1726,6 +1726,16 @@ static int arizona_extcon_remove(struct platform_device *pdev) ...@@ -1726,6 +1726,16 @@ static int arizona_extcon_remove(struct platform_device *pdev)
struct arizona_extcon_info *info = platform_get_drvdata(pdev); struct arizona_extcon_info *info = platform_get_drvdata(pdev);
struct arizona *arizona = info->arizona; struct arizona *arizona = info->arizona;
int jack_irq_rise, jack_irq_fall; int jack_irq_rise, jack_irq_fall;
bool change;
regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
ARIZONA_MICD_ENA, 0,
&change);
if (change) {
regulator_disable(info->micvdd);
pm_runtime_put(info->dev);
}
gpiod_put(info->micd_pol_gpio); gpiod_put(info->micd_pol_gpio);
......
...@@ -17,6 +17,8 @@ ...@@ -17,6 +17,8 @@
#include <linux/regmap.h> #include <linux/regmap.h>
#include <linux/slab.h> #include <linux/slab.h>
#include "extcon-intel.h"
#define CHT_WC_PHYCTRL 0x5e07 #define CHT_WC_PHYCTRL 0x5e07
#define CHT_WC_CHGRCTRL0 0x5e16 #define CHT_WC_CHGRCTRL0 0x5e16
...@@ -29,7 +31,15 @@ ...@@ -29,7 +31,15 @@
#define CHT_WC_CHGRCTRL0_DBPOFF BIT(6) #define CHT_WC_CHGRCTRL0_DBPOFF BIT(6)
#define CHT_WC_CHGRCTRL0_CHR_WDT_NOKICK BIT(7) #define CHT_WC_CHGRCTRL0_CHR_WDT_NOKICK BIT(7)
#define CHT_WC_CHGRCTRL1 0x5e17 #define CHT_WC_CHGRCTRL1 0x5e17
#define CHT_WC_CHGRCTRL1_FUSB_INLMT_100 BIT(0)
#define CHT_WC_CHGRCTRL1_FUSB_INLMT_150 BIT(1)
#define CHT_WC_CHGRCTRL1_FUSB_INLMT_500 BIT(2)
#define CHT_WC_CHGRCTRL1_FUSB_INLMT_900 BIT(3)
#define CHT_WC_CHGRCTRL1_FUSB_INLMT_1500 BIT(4)
#define CHT_WC_CHGRCTRL1_FTEMP_EVENT BIT(5)
#define CHT_WC_CHGRCTRL1_OTGMODE BIT(6)
#define CHT_WC_CHGRCTRL1_DBPEN BIT(7)
#define CHT_WC_USBSRC 0x5e29 #define CHT_WC_USBSRC 0x5e29
#define CHT_WC_USBSRC_STS_MASK GENMASK(1, 0) #define CHT_WC_USBSRC_STS_MASK GENMASK(1, 0)
...@@ -48,6 +58,13 @@ ...@@ -48,6 +58,13 @@
#define CHT_WC_USBSRC_TYPE_OTHER 8 #define CHT_WC_USBSRC_TYPE_OTHER 8
#define CHT_WC_USBSRC_TYPE_DCP_EXTPHY 9 #define CHT_WC_USBSRC_TYPE_DCP_EXTPHY 9
#define CHT_WC_CHGDISCTRL 0x5e2f
#define CHT_WC_CHGDISCTRL_OUT BIT(0)
/* 0 - open drain, 1 - regular push-pull output */
#define CHT_WC_CHGDISCTRL_DRV BIT(4)
/* 0 - pin is controlled by SW, 1 - by HW */
#define CHT_WC_CHGDISCTRL_FN BIT(6)
#define CHT_WC_PWRSRC_IRQ 0x6e03 #define CHT_WC_PWRSRC_IRQ 0x6e03
#define CHT_WC_PWRSRC_IRQ_MASK 0x6e0f #define CHT_WC_PWRSRC_IRQ_MASK 0x6e0f
#define CHT_WC_PWRSRC_STS 0x6e1e #define CHT_WC_PWRSRC_STS 0x6e1e
...@@ -65,15 +82,6 @@ ...@@ -65,15 +82,6 @@
#define CHT_WC_VBUS_GPIO_CTLO_DRV_OD BIT(4) #define CHT_WC_VBUS_GPIO_CTLO_DRV_OD BIT(4)
#define CHT_WC_VBUS_GPIO_CTLO_DIR_OUT BIT(5) #define CHT_WC_VBUS_GPIO_CTLO_DIR_OUT BIT(5)
enum cht_wc_usb_id {
USB_ID_OTG,
USB_ID_GND,
USB_ID_FLOAT,
USB_RID_A,
USB_RID_B,
USB_RID_C,
};
enum cht_wc_mux_select { enum cht_wc_mux_select {
MUX_SEL_PMIC = 0, MUX_SEL_PMIC = 0,
MUX_SEL_SOC, MUX_SEL_SOC,
...@@ -101,9 +109,9 @@ static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts) ...@@ -101,9 +109,9 @@ static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts)
{ {
switch ((pwrsrc_sts & CHT_WC_PWRSRC_USBID_MASK) >> CHT_WC_PWRSRC_USBID_SHIFT) { switch ((pwrsrc_sts & CHT_WC_PWRSRC_USBID_MASK) >> CHT_WC_PWRSRC_USBID_SHIFT) {
case CHT_WC_PWRSRC_RID_GND: case CHT_WC_PWRSRC_RID_GND:
return USB_ID_GND; return INTEL_USB_ID_GND;
case CHT_WC_PWRSRC_RID_FLOAT: case CHT_WC_PWRSRC_RID_FLOAT:
return USB_ID_FLOAT; return INTEL_USB_ID_FLOAT;
case CHT_WC_PWRSRC_RID_ACA: case CHT_WC_PWRSRC_RID_ACA:
default: default:
/* /*
...@@ -111,7 +119,7 @@ static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts) ...@@ -111,7 +119,7 @@ static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts)
* the USBID GPADC channel here and determine ACA role * the USBID GPADC channel here and determine ACA role
* based on that. * based on that.
*/ */
return USB_ID_FLOAT; return INTEL_USB_ID_FLOAT;
} }
} }
...@@ -198,6 +206,30 @@ static void cht_wc_extcon_set_5v_boost(struct cht_wc_extcon_data *ext, ...@@ -198,6 +206,30 @@ static void cht_wc_extcon_set_5v_boost(struct cht_wc_extcon_data *ext,
dev_err(ext->dev, "Error writing Vbus GPIO CTLO: %d\n", ret); dev_err(ext->dev, "Error writing Vbus GPIO CTLO: %d\n", ret);
} }
static void cht_wc_extcon_set_otgmode(struct cht_wc_extcon_data *ext,
bool enable)
{
unsigned int val = enable ? CHT_WC_CHGRCTRL1_OTGMODE : 0;
int ret;
ret = regmap_update_bits(ext->regmap, CHT_WC_CHGRCTRL1,
CHT_WC_CHGRCTRL1_OTGMODE, val);
if (ret)
dev_err(ext->dev, "Error updating CHGRCTRL1 reg: %d\n", ret);
}
static void cht_wc_extcon_enable_charging(struct cht_wc_extcon_data *ext,
bool enable)
{
unsigned int val = enable ? 0 : CHT_WC_CHGDISCTRL_OUT;
int ret;
ret = regmap_update_bits(ext->regmap, CHT_WC_CHGDISCTRL,
CHT_WC_CHGDISCTRL_OUT, val);
if (ret)
dev_err(ext->dev, "Error updating CHGDISCTRL reg: %d\n", ret);
}
/* Small helper to sync EXTCON_CHG_USB_SDP and EXTCON_USB state */ /* Small helper to sync EXTCON_CHG_USB_SDP and EXTCON_USB state */
static void cht_wc_extcon_set_state(struct cht_wc_extcon_data *ext, static void cht_wc_extcon_set_state(struct cht_wc_extcon_data *ext,
unsigned int cable, bool state) unsigned int cable, bool state)
...@@ -221,11 +253,17 @@ static void cht_wc_extcon_pwrsrc_event(struct cht_wc_extcon_data *ext) ...@@ -221,11 +253,17 @@ static void cht_wc_extcon_pwrsrc_event(struct cht_wc_extcon_data *ext)
} }
id = cht_wc_extcon_get_id(ext, pwrsrc_sts); id = cht_wc_extcon_get_id(ext, pwrsrc_sts);
if (id == USB_ID_GND) { if (id == INTEL_USB_ID_GND) {
cht_wc_extcon_enable_charging(ext, false);
cht_wc_extcon_set_otgmode(ext, true);
/* The 5v boost causes a false VBUS / SDP detect, skip */ /* The 5v boost causes a false VBUS / SDP detect, skip */
goto charger_det_done; goto charger_det_done;
} }
cht_wc_extcon_set_otgmode(ext, false);
cht_wc_extcon_enable_charging(ext, true);
/* Plugged into a host/charger or not connected? */ /* Plugged into a host/charger or not connected? */
if (!(pwrsrc_sts & CHT_WC_PWRSRC_VBUS)) { if (!(pwrsrc_sts & CHT_WC_PWRSRC_VBUS)) {
/* Route D+ and D- to PMIC for future charger detection */ /* Route D+ and D- to PMIC for future charger detection */
...@@ -248,7 +286,7 @@ static void cht_wc_extcon_pwrsrc_event(struct cht_wc_extcon_data *ext) ...@@ -248,7 +286,7 @@ static void cht_wc_extcon_pwrsrc_event(struct cht_wc_extcon_data *ext)
ext->previous_cable = cable; ext->previous_cable = cable;
} }
ext->usb_host = ((id == USB_ID_GND) || (id == USB_RID_A)); ext->usb_host = ((id == INTEL_USB_ID_GND) || (id == INTEL_USB_RID_A));
extcon_set_state_sync(ext->edev, EXTCON_USB_HOST, ext->usb_host); extcon_set_state_sync(ext->edev, EXTCON_USB_HOST, ext->usb_host);
} }
...@@ -278,6 +316,14 @@ static int cht_wc_extcon_sw_control(struct cht_wc_extcon_data *ext, bool enable) ...@@ -278,6 +316,14 @@ static int cht_wc_extcon_sw_control(struct cht_wc_extcon_data *ext, bool enable)
{ {
int ret, mask, val; int ret, mask, val;
val = enable ? 0 : CHT_WC_CHGDISCTRL_FN;
ret = regmap_update_bits(ext->regmap, CHT_WC_CHGDISCTRL,
CHT_WC_CHGDISCTRL_FN, val);
if (ret)
dev_err(ext->dev,
"Error setting sw control for CHGDIS pin: %d\n",
ret);
mask = CHT_WC_CHGRCTRL0_SWCONTROL | CHT_WC_CHGRCTRL0_CCSM_OFF; mask = CHT_WC_CHGRCTRL0_SWCONTROL | CHT_WC_CHGRCTRL0_CCSM_OFF;
val = enable ? mask : 0; val = enable ? mask : 0;
ret = regmap_update_bits(ext->regmap, CHT_WC_CHGRCTRL0, mask, val); ret = regmap_update_bits(ext->regmap, CHT_WC_CHGRCTRL0, mask, val);
...@@ -329,7 +375,10 @@ static int cht_wc_extcon_probe(struct platform_device *pdev) ...@@ -329,7 +375,10 @@ static int cht_wc_extcon_probe(struct platform_device *pdev)
/* Enable sw control */ /* Enable sw control */
ret = cht_wc_extcon_sw_control(ext, true); ret = cht_wc_extcon_sw_control(ext, true);
if (ret) if (ret)
return ret; goto disable_sw_control;
/* Disable charging by external battery charger */
cht_wc_extcon_enable_charging(ext, false);
/* Register extcon device */ /* Register extcon device */
ret = devm_extcon_dev_register(ext->dev, ext->edev); ret = devm_extcon_dev_register(ext->dev, ext->edev);
......
// SPDX-License-Identifier: GPL-2.0
/*
* extcon driver for Basin Cove PMIC
*
* Copyright (c) 2019, Intel Corporation.
* Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
*/
#include <linux/extcon-provider.h>
#include <linux/interrupt.h>
#include <linux/mfd/intel_soc_pmic.h>
#include <linux/mfd/intel_soc_pmic_mrfld.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include "extcon-intel.h"
#define BCOVE_USBIDCTRL 0x19
#define BCOVE_USBIDCTRL_ID BIT(0)
#define BCOVE_USBIDCTRL_ACA BIT(1)
#define BCOVE_USBIDCTRL_ALL (BCOVE_USBIDCTRL_ID | BCOVE_USBIDCTRL_ACA)
#define BCOVE_USBIDSTS 0x1a
#define BCOVE_USBIDSTS_GND BIT(0)
#define BCOVE_USBIDSTS_RARBRC_MASK GENMASK(2, 1)
#define BCOVE_USBIDSTS_RARBRC_SHIFT 1
#define BCOVE_USBIDSTS_NO_ACA 0
#define BCOVE_USBIDSTS_R_ID_A 1
#define BCOVE_USBIDSTS_R_ID_B 2
#define BCOVE_USBIDSTS_R_ID_C 3
#define BCOVE_USBIDSTS_FLOAT BIT(3)
#define BCOVE_USBIDSTS_SHORT BIT(4)
#define BCOVE_CHGRIRQ_ALL (BCOVE_CHGRIRQ_VBUSDET | BCOVE_CHGRIRQ_DCDET | \
BCOVE_CHGRIRQ_BATTDET | BCOVE_CHGRIRQ_USBIDDET)
#define BCOVE_CHGRCTRL0 0x4b
#define BCOVE_CHGRCTRL0_CHGRRESET BIT(0)
#define BCOVE_CHGRCTRL0_EMRGCHREN BIT(1)
#define BCOVE_CHGRCTRL0_EXTCHRDIS BIT(2)
#define BCOVE_CHGRCTRL0_SWCONTROL BIT(3)
#define BCOVE_CHGRCTRL0_TTLCK BIT(4)
#define BCOVE_CHGRCTRL0_BIT_5 BIT(5)
#define BCOVE_CHGRCTRL0_BIT_6 BIT(6)
#define BCOVE_CHGRCTRL0_CHR_WDT_NOKICK BIT(7)
struct mrfld_extcon_data {
struct device *dev;
struct regmap *regmap;
struct extcon_dev *edev;
unsigned int status;
unsigned int id;
};
static const unsigned int mrfld_extcon_cable[] = {
EXTCON_USB,
EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_CDP,
EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_ACA,
EXTCON_NONE,
};
static int mrfld_extcon_clear(struct mrfld_extcon_data *data, unsigned int reg,
unsigned int mask)
{
return regmap_update_bits(data->regmap, reg, mask, 0x00);
}
static int mrfld_extcon_set(struct mrfld_extcon_data *data, unsigned int reg,
unsigned int mask)
{
return regmap_update_bits(data->regmap, reg, mask, 0xff);
}
static int mrfld_extcon_sw_control(struct mrfld_extcon_data *data, bool enable)
{
unsigned int mask = BCOVE_CHGRCTRL0_SWCONTROL;
struct device *dev = data->dev;
int ret;
if (enable)
ret = mrfld_extcon_set(data, BCOVE_CHGRCTRL0, mask);
else
ret = mrfld_extcon_clear(data, BCOVE_CHGRCTRL0, mask);
if (ret)
dev_err(dev, "can't set SW control: %d\n", ret);
return ret;
}
static int mrfld_extcon_get_id(struct mrfld_extcon_data *data)
{
struct regmap *regmap = data->regmap;
unsigned int id;
bool ground;
int ret;
ret = regmap_read(regmap, BCOVE_USBIDSTS, &id);
if (ret)
return ret;
if (id & BCOVE_USBIDSTS_FLOAT)
return INTEL_USB_ID_FLOAT;
switch ((id & BCOVE_USBIDSTS_RARBRC_MASK) >> BCOVE_USBIDSTS_RARBRC_SHIFT) {
case BCOVE_USBIDSTS_R_ID_A:
return INTEL_USB_RID_A;
case BCOVE_USBIDSTS_R_ID_B:
return INTEL_USB_RID_B;
case BCOVE_USBIDSTS_R_ID_C:
return INTEL_USB_RID_C;
}
/*
* PMIC A0 reports USBIDSTS_GND = 1 for ID_GND,
* but PMIC B0 reports USBIDSTS_GND = 0 for ID_GND.
* Thus we must check this bit last.
*/
ground = id & BCOVE_USBIDSTS_GND;
switch ('A' + BCOVE_MAJOR(data->id)) {
case 'A':
return ground ? INTEL_USB_ID_GND : INTEL_USB_ID_FLOAT;
case 'B':
return ground ? INTEL_USB_ID_FLOAT : INTEL_USB_ID_GND;
}
/* Unknown or unsupported type */
return INTEL_USB_ID_FLOAT;
}
static int mrfld_extcon_role_detect(struct mrfld_extcon_data *data)
{
unsigned int id;
bool usb_host;
int ret;
ret = mrfld_extcon_get_id(data);
if (ret < 0)
return ret;
id = ret;
usb_host = (id == INTEL_USB_ID_GND) || (id == INTEL_USB_RID_A);
extcon_set_state_sync(data->edev, EXTCON_USB_HOST, usb_host);
return 0;
}
static int mrfld_extcon_cable_detect(struct mrfld_extcon_data *data)
{
struct regmap *regmap = data->regmap;
unsigned int status, change;
int ret;
/*
* It seems SCU firmware clears the content of BCOVE_CHGRIRQ1
* and makes it useless for OS. Instead we compare a previously
* stored status to the current one, provided by BCOVE_SCHGRIRQ1.
*/
ret = regmap_read(regmap, BCOVE_SCHGRIRQ1, &status);
if (ret)
return ret;
change = status ^ data->status;
if (!change)
return -ENODATA;
if (change & BCOVE_CHGRIRQ_USBIDDET) {
ret = mrfld_extcon_role_detect(data);
if (ret)
return ret;
}
data->status = status;
return 0;
}
static irqreturn_t mrfld_extcon_interrupt(int irq, void *dev_id)
{
struct mrfld_extcon_data *data = dev_id;
int ret;
ret = mrfld_extcon_cable_detect(data);
mrfld_extcon_clear(data, BCOVE_MIRQLVL1, BCOVE_LVL1_CHGR);
return ret ? IRQ_NONE : IRQ_HANDLED;
}
static int mrfld_extcon_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct intel_soc_pmic *pmic = dev_get_drvdata(dev->parent);
struct regmap *regmap = pmic->regmap;
struct mrfld_extcon_data *data;
unsigned int id;
int irq, ret;
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->dev = dev;
data->regmap = regmap;
data->edev = devm_extcon_dev_allocate(dev, mrfld_extcon_cable);
if (IS_ERR(data->edev))
return -ENOMEM;
ret = devm_extcon_dev_register(dev, data->edev);
if (ret < 0) {
dev_err(dev, "can't register extcon device: %d\n", ret);
return ret;
}
ret = devm_request_threaded_irq(dev, irq, NULL, mrfld_extcon_interrupt,
IRQF_ONESHOT | IRQF_SHARED, pdev->name,
data);
if (ret) {
dev_err(dev, "can't register IRQ handler: %d\n", ret);
return ret;
}
ret = regmap_read(regmap, BCOVE_ID, &id);
if (ret) {
dev_err(dev, "can't read PMIC ID: %d\n", ret);
return ret;
}
data->id = id;
ret = mrfld_extcon_sw_control(data, true);
if (ret)
return ret;
/* Get initial state */
mrfld_extcon_role_detect(data);
mrfld_extcon_clear(data, BCOVE_MIRQLVL1, BCOVE_LVL1_CHGR);
mrfld_extcon_clear(data, BCOVE_MCHGRIRQ1, BCOVE_CHGRIRQ_ALL);
mrfld_extcon_set(data, BCOVE_USBIDCTRL, BCOVE_USBIDCTRL_ALL);
platform_set_drvdata(pdev, data);
return 0;
}
static int mrfld_extcon_remove(struct platform_device *pdev)
{
struct mrfld_extcon_data *data = platform_get_drvdata(pdev);
mrfld_extcon_sw_control(data, false);
return 0;
}
static const struct platform_device_id mrfld_extcon_id_table[] = {
{ .name = "mrfld_bcove_pwrsrc" },
{}
};
MODULE_DEVICE_TABLE(platform, mrfld_extcon_id_table);
static struct platform_driver mrfld_extcon_driver = {
.driver = {
.name = "mrfld_bcove_pwrsrc",
},
.probe = mrfld_extcon_probe,
.remove = mrfld_extcon_remove,
.id_table = mrfld_extcon_id_table,
};
module_platform_driver(mrfld_extcon_driver);
MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
MODULE_DESCRIPTION("extcon driver for Intel Merrifield Basin Cove PMIC");
MODULE_LICENSE("GPL v2");
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Header file for Intel extcon hardware
*
* Copyright (C) 2019 Intel Corporation. All rights reserved.
*/
#ifndef __EXTCON_INTEL_H__
#define __EXTCON_INTEL_H__
enum extcon_intel_usb_id {
INTEL_USB_ID_OTG,
INTEL_USB_ID_GND,
INTEL_USB_ID_FLOAT,
INTEL_USB_RID_A,
INTEL_USB_RID_B,
INTEL_USB_RID_C,
};
#endif /* __EXTCON_INTEL_H__ */
...@@ -254,7 +254,7 @@ static int vpd_section_destroy(struct vpd_section *sec) ...@@ -254,7 +254,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
static int vpd_sections_init(phys_addr_t physaddr) static int vpd_sections_init(phys_addr_t physaddr)
{ {
struct vpd_cbmem __iomem *temp; struct vpd_cbmem *temp;
struct vpd_cbmem header; struct vpd_cbmem header;
int ret = 0; int ret = 0;
...@@ -262,7 +262,7 @@ static int vpd_sections_init(phys_addr_t physaddr) ...@@ -262,7 +262,7 @@ static int vpd_sections_init(phys_addr_t physaddr)
if (!temp) if (!temp)
return -ENOMEM; return -ENOMEM;
memcpy_fromio(&header, temp, sizeof(struct vpd_cbmem)); memcpy(&header, temp, sizeof(struct vpd_cbmem));
memunmap(temp); memunmap(temp);
if (header.magic != VPD_CBMEM_MAGIC) if (header.magic != VPD_CBMEM_MAGIC)
......
...@@ -130,6 +130,7 @@ static void ubx_remove(struct serdev_device *serdev) ...@@ -130,6 +130,7 @@ static void ubx_remove(struct serdev_device *serdev)
#ifdef CONFIG_OF #ifdef CONFIG_OF
static const struct of_device_id ubx_of_match[] = { static const struct of_device_id ubx_of_match[] = {
{ .compatible = "u-blox,neo-6m" },
{ .compatible = "u-blox,neo-8" }, { .compatible = "u-blox,neo-8" },
{ .compatible = "u-blox,neo-m8" }, { .compatible = "u-blox,neo-m8" },
{}, {},
......
...@@ -75,20 +75,13 @@ config CORESIGHT_SOURCE_ETM4X ...@@ -75,20 +75,13 @@ config CORESIGHT_SOURCE_ETM4X
bool "CoreSight Embedded Trace Macrocell 4.x driver" bool "CoreSight Embedded Trace Macrocell 4.x driver"
depends on ARM64 depends on ARM64
select CORESIGHT_LINKS_AND_SINKS select CORESIGHT_LINKS_AND_SINKS
select PID_IN_CONTEXTIDR
help help
This driver provides support for the ETM4.x tracer module, tracing the This driver provides support for the ETM4.x tracer module, tracing the
instructions that a processor is executing. This is primarily useful instructions that a processor is executing. This is primarily useful
for instruction level tracing. Depending on the implemented version for instruction level tracing. Depending on the implemented version
data tracing may also be available. data tracing may also be available.
config CORESIGHT_DYNAMIC_REPLICATOR
bool "CoreSight Programmable Replicator driver"
depends on CORESIGHT_LINKS_AND_SINKS
help
This enables support for dynamic CoreSight replicator link driver.
The programmable ATB replicator allows independent filtering of the
trace data based on the traceid.
config CORESIGHT_STM config CORESIGHT_STM
bool "CoreSight System Trace Macrocell driver" bool "CoreSight System Trace Macrocell driver"
depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64 depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64
......
...@@ -15,7 +15,6 @@ obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \ ...@@ -15,7 +15,6 @@ obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
coresight-etm3x-sysfs.o coresight-etm3x-sysfs.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \ obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
coresight-etm4x-sysfs.o coresight-etm4x-sysfs.o
obj-$(CONFIG_CORESIGHT_DYNAMIC_REPLICATOR) += coresight-dynamic-replicator.o
obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
...@@ -485,12 +485,12 @@ static int catu_disable(struct coresight_device *csdev, void *__unused) ...@@ -485,12 +485,12 @@ static int catu_disable(struct coresight_device *csdev, void *__unused)
return rc; return rc;
} }
const struct coresight_ops_helper catu_helper_ops = { static const struct coresight_ops_helper catu_helper_ops = {
.enable = catu_enable, .enable = catu_enable,
.disable = catu_disable, .disable = catu_disable,
}; };
const struct coresight_ops catu_ops = { static const struct coresight_ops catu_ops = {
.helper_ops = &catu_helper_ops, .helper_ops = &catu_helper_ops,
}; };
...@@ -557,8 +557,9 @@ static int catu_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -557,8 +557,9 @@ static int catu_probe(struct amba_device *adev, const struct amba_id *id)
drvdata->csdev = coresight_register(&catu_desc); drvdata->csdev = coresight_register(&catu_desc);
if (IS_ERR(drvdata->csdev)) if (IS_ERR(drvdata->csdev))
ret = PTR_ERR(drvdata->csdev); ret = PTR_ERR(drvdata->csdev);
else
pm_runtime_put(&adev->dev);
out: out:
pm_runtime_put(&adev->dev);
return ret; return ret;
} }
......
...@@ -109,11 +109,6 @@ static inline bool coresight_is_catu_device(struct coresight_device *csdev) ...@@ -109,11 +109,6 @@ static inline bool coresight_is_catu_device(struct coresight_device *csdev)
return true; return true;
} }
#ifdef CONFIG_CORESIGHT_CATU
extern const struct etr_buf_operations etr_catu_buf_ops; extern const struct etr_buf_operations etr_catu_buf_ops;
#else
/* Dummy declaration for the CATU ops */
static const struct etr_buf_operations etr_catu_buf_ops;
#endif
#endif #endif
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2011-2015, The Linux Foundation. All rights reserved.
*/
#include <linux/amba/bus.h>
#include <linux/clk.h>
#include <linux/coresight.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include "coresight-priv.h"
#define REPLICATOR_IDFILTER0 0x000
#define REPLICATOR_IDFILTER1 0x004
/**
* struct replicator_state - specifics associated to a replicator component
* @base: memory mapped base address for this component.
* @dev: the device entity associated with this component
* @atclk: optional clock for the core parts of the replicator.
* @csdev: component vitals needed by the framework
*/
struct replicator_state {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
};
/*
* replicator_reset : Reset the replicator configuration to sane values.
*/
static void replicator_reset(struct replicator_state *drvdata)
{
CS_UNLOCK(drvdata->base);
if (!coresight_claim_device_unlocked(drvdata->base)) {
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0);
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1);
coresight_disclaim_device_unlocked(drvdata->base);
}
CS_LOCK(drvdata->base);
}
static int replicator_enable(struct coresight_device *csdev, int inport,
int outport)
{
int rc = 0;
u32 reg;
struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent);
switch (outport) {
case 0:
reg = REPLICATOR_IDFILTER0;
break;
case 1:
reg = REPLICATOR_IDFILTER1;
break;
default:
WARN_ON(1);
return -EINVAL;
}
CS_UNLOCK(drvdata->base);
if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
(readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
rc = coresight_claim_device_unlocked(drvdata->base);
/* Ensure that the outport is enabled. */
if (!rc) {
writel_relaxed(0x00, drvdata->base + reg);
dev_dbg(drvdata->dev, "REPLICATOR enabled\n");
}
CS_LOCK(drvdata->base);
return rc;
}
static void replicator_disable(struct coresight_device *csdev, int inport,
int outport)
{
u32 reg;
struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent);
switch (outport) {
case 0:
reg = REPLICATOR_IDFILTER0;
break;
case 1:
reg = REPLICATOR_IDFILTER1;
break;
default:
WARN_ON(1);
return;
}
CS_UNLOCK(drvdata->base);
/* disable the flow of ATB data through port */
writel_relaxed(0xff, drvdata->base + reg);
if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
(readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "REPLICATOR disabled\n");
}
static const struct coresight_ops_link replicator_link_ops = {
.enable = replicator_enable,
.disable = replicator_disable,
};
static const struct coresight_ops replicator_cs_ops = {
.link_ops = &replicator_link_ops,
};
#define coresight_replicator_reg(name, offset) \
coresight_simple_reg32(struct replicator_state, name, offset)
coresight_replicator_reg(idfilter0, REPLICATOR_IDFILTER0);
coresight_replicator_reg(idfilter1, REPLICATOR_IDFILTER1);
static struct attribute *replicator_mgmt_attrs[] = {
&dev_attr_idfilter0.attr,
&dev_attr_idfilter1.attr,
NULL,
};
static const struct attribute_group replicator_mgmt_group = {
.attrs = replicator_mgmt_attrs,
.name = "mgmt",
};
static const struct attribute_group *replicator_groups[] = {
&replicator_mgmt_group,
NULL,
};
static int replicator_probe(struct amba_device *adev, const struct amba_id *id)
{
int ret;
struct device *dev = &adev->dev;
struct resource *res = &adev->res;
struct coresight_platform_data *pdata = NULL;
struct replicator_state *drvdata;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
void __iomem *base;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
adev->dev.platform_data = pdata;
}
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
drvdata->dev = &adev->dev;
drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk);
if (ret)
return ret;
}
/* Validity for the resource is already checked by the AMBA core */
base = devm_ioremap_resource(dev, res);
if (IS_ERR(base))
return PTR_ERR(base);
drvdata->base = base;
dev_set_drvdata(dev, drvdata);
pm_runtime_put(&adev->dev);
desc.type = CORESIGHT_DEV_TYPE_LINK;
desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT;
desc.ops = &replicator_cs_ops;
desc.pdata = adev->dev.platform_data;
desc.dev = &adev->dev;
desc.groups = replicator_groups;
drvdata->csdev = coresight_register(&desc);
if (!IS_ERR(drvdata->csdev)) {
replicator_reset(drvdata);
return 0;
}
return PTR_ERR(drvdata->csdev);
}
#ifdef CONFIG_PM
static int replicator_runtime_suspend(struct device *dev)
{
struct replicator_state *drvdata = dev_get_drvdata(dev);
if (drvdata && !IS_ERR(drvdata->atclk))
clk_disable_unprepare(drvdata->atclk);
return 0;
}
static int replicator_runtime_resume(struct device *dev)
{
struct replicator_state *drvdata = dev_get_drvdata(dev);
if (drvdata && !IS_ERR(drvdata->atclk))
clk_prepare_enable(drvdata->atclk);
return 0;
}
#endif
static const struct dev_pm_ops replicator_dev_pm_ops = {
SET_RUNTIME_PM_OPS(replicator_runtime_suspend,
replicator_runtime_resume,
NULL)
};
static const struct amba_id replicator_ids[] = {
{
.id = 0x000bb909,
.mask = 0x000fffff,
},
{
/* Coresight SoC-600 */
.id = 0x000bb9ec,
.mask = 0x000fffff,
},
{ 0, 0 },
};
static struct amba_driver replicator_driver = {
.drv = {
.name = "coresight-dynamic-replicator",
.pm = &replicator_dev_pm_ops,
.suppress_bind_attrs = true,
},
.probe = replicator_probe,
.id_table = replicator_ids,
};
builtin_amba_driver(replicator_driver);
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
* Description: CoreSight Embedded Trace Buffer driver * Description: CoreSight Embedded Trace Buffer driver
*/ */
#include <linux/atomic.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/types.h> #include <linux/types.h>
...@@ -71,6 +72,8 @@ ...@@ -71,6 +72,8 @@
* @miscdev: specifics to handle "/dev/xyz.etb" entry. * @miscdev: specifics to handle "/dev/xyz.etb" entry.
* @spinlock: only one at a time pls. * @spinlock: only one at a time pls.
* @reading: synchronise user space access to etb buffer. * @reading: synchronise user space access to etb buffer.
* @pid: Process ID of the process being monitored by the session
* that is using this component.
* @buf: area of memory where ETB buffer content gets sent. * @buf: area of memory where ETB buffer content gets sent.
* @mode: this ETB is being used. * @mode: this ETB is being used.
* @buffer_depth: size of @buf. * @buffer_depth: size of @buf.
...@@ -84,6 +87,7 @@ struct etb_drvdata { ...@@ -84,6 +87,7 @@ struct etb_drvdata {
struct miscdevice miscdev; struct miscdevice miscdev;
spinlock_t spinlock; spinlock_t spinlock;
local_t reading; local_t reading;
pid_t pid;
u8 *buf; u8 *buf;
u32 mode; u32 mode;
u32 buffer_depth; u32 buffer_depth;
...@@ -93,17 +97,9 @@ struct etb_drvdata { ...@@ -93,17 +97,9 @@ struct etb_drvdata {
static int etb_set_buffer(struct coresight_device *csdev, static int etb_set_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle); struct perf_output_handle *handle);
static unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata) static inline unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata)
{ {
u32 depth = 0; return readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG);
pm_runtime_get_sync(drvdata->dev);
/* RO registers don't need locking */
depth = readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG);
pm_runtime_put(drvdata->dev);
return depth;
} }
static void __etb_enable_hw(struct etb_drvdata *drvdata) static void __etb_enable_hw(struct etb_drvdata *drvdata)
...@@ -159,14 +155,15 @@ static int etb_enable_sysfs(struct coresight_device *csdev) ...@@ -159,14 +155,15 @@ static int etb_enable_sysfs(struct coresight_device *csdev)
goto out; goto out;
} }
/* Nothing to do, the tracer is already enabled. */ if (drvdata->mode == CS_MODE_DISABLED) {
if (drvdata->mode == CS_MODE_SYSFS) ret = etb_enable_hw(drvdata);
goto out; if (ret)
goto out;
ret = etb_enable_hw(drvdata);
if (!ret)
drvdata->mode = CS_MODE_SYSFS; drvdata->mode = CS_MODE_SYSFS;
}
atomic_inc(csdev->refcnt);
out: out:
spin_unlock_irqrestore(&drvdata->spinlock, flags); spin_unlock_irqrestore(&drvdata->spinlock, flags);
return ret; return ret;
...@@ -175,29 +172,52 @@ static int etb_enable_sysfs(struct coresight_device *csdev) ...@@ -175,29 +172,52 @@ static int etb_enable_sysfs(struct coresight_device *csdev)
static int etb_enable_perf(struct coresight_device *csdev, void *data) static int etb_enable_perf(struct coresight_device *csdev, void *data)
{ {
int ret = 0; int ret = 0;
pid_t pid;
unsigned long flags; unsigned long flags;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct perf_output_handle *handle = data;
spin_lock_irqsave(&drvdata->spinlock, flags); spin_lock_irqsave(&drvdata->spinlock, flags);
/* No need to continue if the component is already in use. */ /* No need to continue if the component is already in used by sysFS. */
if (drvdata->mode != CS_MODE_DISABLED) { if (drvdata->mode == CS_MODE_SYSFS) {
ret = -EBUSY;
goto out;
}
/* Get a handle on the pid of the process to monitor */
pid = task_pid_nr(handle->event->owner);
if (drvdata->pid != -1 && drvdata->pid != pid) {
ret = -EBUSY; ret = -EBUSY;
goto out; goto out;
} }
/*
* No HW configuration is needed if the sink is already in
* use for this session.
*/
if (drvdata->pid == pid) {
atomic_inc(csdev->refcnt);
goto out;
}
/* /*
* We don't have an internal state to clean up if we fail to setup * We don't have an internal state to clean up if we fail to setup
* the perf buffer. So we can perform the step before we turn the * the perf buffer. So we can perform the step before we turn the
* ETB on and leave without cleaning up. * ETB on and leave without cleaning up.
*/ */
ret = etb_set_buffer(csdev, (struct perf_output_handle *)data); ret = etb_set_buffer(csdev, handle);
if (ret) if (ret)
goto out; goto out;
ret = etb_enable_hw(drvdata); ret = etb_enable_hw(drvdata);
if (!ret) if (!ret) {
/* Associate with monitored process. */
drvdata->pid = pid;
drvdata->mode = CS_MODE_PERF; drvdata->mode = CS_MODE_PERF;
atomic_inc(csdev->refcnt);
}
out: out:
spin_unlock_irqrestore(&drvdata->spinlock, flags); spin_unlock_irqrestore(&drvdata->spinlock, flags);
...@@ -325,27 +345,35 @@ static void etb_disable_hw(struct etb_drvdata *drvdata) ...@@ -325,27 +345,35 @@ static void etb_disable_hw(struct etb_drvdata *drvdata)
coresight_disclaim_device(drvdata->base); coresight_disclaim_device(drvdata->base);
} }
static void etb_disable(struct coresight_device *csdev) static int etb_disable(struct coresight_device *csdev)
{ {
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&drvdata->spinlock, flags); spin_lock_irqsave(&drvdata->spinlock, flags);
/* Disable the ETB only if it needs to */ if (atomic_dec_return(csdev->refcnt)) {
if (drvdata->mode != CS_MODE_DISABLED) { spin_unlock_irqrestore(&drvdata->spinlock, flags);
etb_disable_hw(drvdata); return -EBUSY;
drvdata->mode = CS_MODE_DISABLED;
} }
/* Complain if we (somehow) got out of sync */
WARN_ON_ONCE(drvdata->mode == CS_MODE_DISABLED);
etb_disable_hw(drvdata);
/* Dissociate from monitored process. */
drvdata->pid = -1;
drvdata->mode = CS_MODE_DISABLED;
spin_unlock_irqrestore(&drvdata->spinlock, flags); spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_dbg(drvdata->dev, "ETB disabled\n"); dev_dbg(drvdata->dev, "ETB disabled\n");
return 0;
} }
static void *etb_alloc_buffer(struct coresight_device *csdev, int cpu, static void *etb_alloc_buffer(struct coresight_device *csdev,
void **pages, int nr_pages, bool overwrite) struct perf_event *event, void **pages,
int nr_pages, bool overwrite)
{ {
int node; int node, cpu = event->cpu;
struct cs_buffers *buf; struct cs_buffers *buf;
if (cpu == -1) if (cpu == -1)
...@@ -404,7 +432,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev, ...@@ -404,7 +432,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
const u32 *barrier; const u32 *barrier;
u32 read_ptr, write_ptr, capacity; u32 read_ptr, write_ptr, capacity;
u32 status, read_data; u32 status, read_data;
unsigned long offset, to_read; unsigned long offset, to_read = 0, flags;
struct cs_buffers *buf = sink_config; struct cs_buffers *buf = sink_config;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
...@@ -413,6 +441,12 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev, ...@@ -413,6 +441,12 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS; capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
spin_lock_irqsave(&drvdata->spinlock, flags);
/* Don't do anything if another tracer is using this sink */
if (atomic_read(csdev->refcnt) != 1)
goto out;
__etb_disable_hw(drvdata); __etb_disable_hw(drvdata);
CS_UNLOCK(drvdata->base); CS_UNLOCK(drvdata->base);
...@@ -523,6 +557,8 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev, ...@@ -523,6 +557,8 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
} }
__etb_enable_hw(drvdata); __etb_enable_hw(drvdata);
CS_LOCK(drvdata->base); CS_LOCK(drvdata->base);
out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return to_read; return to_read;
} }
...@@ -720,7 +756,6 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -720,7 +756,6 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
spin_lock_init(&drvdata->spinlock); spin_lock_init(&drvdata->spinlock);
drvdata->buffer_depth = etb_get_buffer_depth(drvdata); drvdata->buffer_depth = etb_get_buffer_depth(drvdata);
pm_runtime_put(&adev->dev);
if (drvdata->buffer_depth & 0x80000000) if (drvdata->buffer_depth & 0x80000000)
return -EINVAL; return -EINVAL;
...@@ -730,6 +765,9 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -730,6 +765,9 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
if (!drvdata->buf) if (!drvdata->buf)
return -ENOMEM; return -ENOMEM;
/* This device is not associated with a session */
drvdata->pid = -1;
desc.type = CORESIGHT_DEV_TYPE_SINK; desc.type = CORESIGHT_DEV_TYPE_SINK;
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER; desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
desc.ops = &etb_cs_ops; desc.ops = &etb_cs_ops;
...@@ -747,6 +785,7 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -747,6 +785,7 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
if (ret) if (ret)
goto err_misc_register; goto err_misc_register;
pm_runtime_put(&adev->dev);
return 0; return 0;
err_misc_register: err_misc_register:
......
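The ETB changes above introduce reference counting plus an owning-pid check so that several perf events from the same process can share the sink. A simplified, illustrative restatement of those rules (not the driver code itself):

/*
 * A perf session may claim the sink only if it is free or already
 * owned by the same process; the hardware is torn down only when the
 * last user goes away.
 */
#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/types.h>

enum sink_mode { MODE_DISABLED, MODE_SYSFS, MODE_PERF };

struct shared_sink {
	enum sink_mode mode;
	pid_t owner;			/* -1 when no process owns the sink */
	atomic_t refcnt;
};

static int sink_attach_perf(struct shared_sink *s, pid_t pid)
{
	if (s->mode == MODE_SYSFS)
		return -EBUSY;		/* held via sysfs */
	if (s->owner != -1 && s->owner != pid)
		return -EBUSY;		/* owned by another process */
	if (s->owner == -1) {
		s->owner = pid;		/* first user: program the HW */
		s->mode = MODE_PERF;
	}
	atomic_inc(&s->refcnt);
	return 0;
}

static void sink_detach(struct shared_sink *s)
{
	if (atomic_dec_return(&s->refcnt))
		return;			/* still in use by a sibling event */
	s->owner = -1;			/* last user: disable the HW */
	s->mode = MODE_DISABLED;
}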
...@@ -29,6 +29,7 @@ static DEFINE_PER_CPU(struct coresight_device *, csdev_src); ...@@ -29,6 +29,7 @@ static DEFINE_PER_CPU(struct coresight_device *, csdev_src);
/* ETMv3.5/PTM's ETMCR is 'config' */ /* ETMv3.5/PTM's ETMCR is 'config' */
PMU_FORMAT_ATTR(cycacc, "config:" __stringify(ETM_OPT_CYCACC)); PMU_FORMAT_ATTR(cycacc, "config:" __stringify(ETM_OPT_CYCACC));
PMU_FORMAT_ATTR(contextid, "config:" __stringify(ETM_OPT_CTXTID));
PMU_FORMAT_ATTR(timestamp, "config:" __stringify(ETM_OPT_TS)); PMU_FORMAT_ATTR(timestamp, "config:" __stringify(ETM_OPT_TS));
PMU_FORMAT_ATTR(retstack, "config:" __stringify(ETM_OPT_RETSTK)); PMU_FORMAT_ATTR(retstack, "config:" __stringify(ETM_OPT_RETSTK));
/* Sink ID - same for all ETMs */ /* Sink ID - same for all ETMs */
...@@ -36,6 +37,7 @@ PMU_FORMAT_ATTR(sinkid, "config2:0-31"); ...@@ -36,6 +37,7 @@ PMU_FORMAT_ATTR(sinkid, "config2:0-31");
static struct attribute *etm_config_formats_attr[] = { static struct attribute *etm_config_formats_attr[] = {
&format_attr_cycacc.attr, &format_attr_cycacc.attr,
&format_attr_contextid.attr,
&format_attr_timestamp.attr, &format_attr_timestamp.attr,
&format_attr_retstack.attr, &format_attr_retstack.attr,
&format_attr_sinkid.attr, &format_attr_sinkid.attr,
...@@ -118,23 +120,34 @@ static int etm_event_init(struct perf_event *event) ...@@ -118,23 +120,34 @@ static int etm_event_init(struct perf_event *event)
return ret; return ret;
} }
static void free_sink_buffer(struct etm_event_data *event_data)
{
int cpu;
cpumask_t *mask = &event_data->mask;
struct coresight_device *sink;
if (WARN_ON(cpumask_empty(mask)))
return;
if (!event_data->snk_config)
return;
cpu = cpumask_first(mask);
sink = coresight_get_sink(etm_event_cpu_path(event_data, cpu));
sink_ops(sink)->free_buffer(event_data->snk_config);
}
static void free_event_data(struct work_struct *work) static void free_event_data(struct work_struct *work)
{ {
int cpu; int cpu;
cpumask_t *mask; cpumask_t *mask;
struct etm_event_data *event_data; struct etm_event_data *event_data;
struct coresight_device *sink;
event_data = container_of(work, struct etm_event_data, work); event_data = container_of(work, struct etm_event_data, work);
mask = &event_data->mask; mask = &event_data->mask;
/* Free the sink buffers, if there are any */ /* Free the sink buffers, if there are any */
if (event_data->snk_config && !WARN_ON(cpumask_empty(mask))) { free_sink_buffer(event_data);
cpu = cpumask_first(mask);
sink = coresight_get_sink(etm_event_cpu_path(event_data, cpu));
if (sink_ops(sink)->free_buffer)
sink_ops(sink)->free_buffer(event_data->snk_config);
}
for_each_cpu(cpu, mask) { for_each_cpu(cpu, mask) {
struct list_head **ppath; struct list_head **ppath;
...@@ -213,7 +226,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages, ...@@ -213,7 +226,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
sink = coresight_get_enabled_sink(true); sink = coresight_get_enabled_sink(true);
} }
if (!sink || !sink_ops(sink)->alloc_buffer) if (!sink)
goto err; goto err;
mask = &event_data->mask; mask = &event_data->mask;
...@@ -259,9 +272,12 @@ static void *etm_setup_aux(struct perf_event *event, void **pages, ...@@ -259,9 +272,12 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
if (cpu >= nr_cpu_ids) if (cpu >= nr_cpu_ids)
goto err; goto err;
if (!sink_ops(sink)->alloc_buffer || !sink_ops(sink)->free_buffer)
goto err;
/* Allocate the sink buffer for this session */ /* Allocate the sink buffer for this session */
event_data->snk_config = event_data->snk_config =
sink_ops(sink)->alloc_buffer(sink, cpu, pages, sink_ops(sink)->alloc_buffer(sink, event, pages,
nr_pages, overwrite); nr_pages, overwrite);
if (!event_data->snk_config) if (!event_data->snk_config)
goto err; goto err;
...@@ -566,7 +582,8 @@ static int __init etm_perf_init(void) ...@@ -566,7 +582,8 @@ static int __init etm_perf_init(void)
{ {
int ret; int ret;
etm_pmu.capabilities = PERF_PMU_CAP_EXCLUSIVE; etm_pmu.capabilities = (PERF_PMU_CAP_EXCLUSIVE |
PERF_PMU_CAP_ITRACE);
etm_pmu.attr_groups = etm_pmu_attr_groups; etm_pmu.attr_groups = etm_pmu_attr_groups;
etm_pmu.task_ctx_nr = perf_sw_context; etm_pmu.task_ctx_nr = perf_sw_context;
......
...@@ -138,8 +138,11 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata) ...@@ -138,8 +138,11 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
drvdata->base + TRCCNTVRn(i)); drvdata->base + TRCCNTVRn(i));
} }
/* Resource selector pair 0 is always implemented and reserved */ /*
for (i = 0; i < drvdata->nr_resource * 2; i++) * Resource selector pair 0 is always implemented and reserved. As
* such start at 2.
*/
for (i = 2; i < drvdata->nr_resource * 2; i++)
writel_relaxed(config->res_ctrl[i], writel_relaxed(config->res_ctrl[i],
drvdata->base + TRCRSCTLRn(i)); drvdata->base + TRCRSCTLRn(i));
...@@ -201,6 +204,91 @@ static void etm4_enable_hw_smp_call(void *info) ...@@ -201,6 +204,91 @@ static void etm4_enable_hw_smp_call(void *info)
arg->rc = etm4_enable_hw(arg->drvdata); arg->rc = etm4_enable_hw(arg->drvdata);
} }
/*
* The goal of function etm4_config_timestamp_event() is to configure a
* counter that will tell the tracer to emit a timestamp packet when it
 * reaches zero. This is done in order to get a more fine-grained idea
* of when instructions are executed so that they can be correlated
* with execution on other CPUs.
*
 * To do this, the counter itself is configured to self-reload and
 * TRCRSCTLR1 (always true) is used to get the counter to decrement. From
* there a resource selector is configured with the counter and the
* timestamp control register to use the resource selector to trigger the
* event that will insert a timestamp packet in the stream.
*/
static int etm4_config_timestamp_event(struct etmv4_drvdata *drvdata)
{
int ctridx, ret = -EINVAL;
int counter, rselector;
u32 val = 0;
struct etmv4_config *config = &drvdata->config;
/* No point in trying if we don't have at least one counter */
if (!drvdata->nr_cntr)
goto out;
/* Find a counter that hasn't been initialised */
for (ctridx = 0; ctridx < drvdata->nr_cntr; ctridx++)
if (config->cntr_val[ctridx] == 0)
break;
/* All the counters have been configured already, bail out */
if (ctridx == drvdata->nr_cntr) {
pr_debug("%s: no available counter found\n", __func__);
ret = -ENOSPC;
goto out;
}
/*
 * Search for an available resource selector to use, starting at
 * '2' since every implementation has at least 2 resource selectors.
* ETMIDR4 gives the number of resource selector _pairs_,
* hence multiply by 2.
*/
for (rselector = 2; rselector < drvdata->nr_resource * 2; rselector++)
if (!config->res_ctrl[rselector])
break;
if (rselector == drvdata->nr_resource * 2) {
pr_debug("%s: no available resource selector found\n",
__func__);
ret = -ENOSPC;
goto out;
}
/* Remember what counter we used */
counter = 1 << ctridx;
/*
 * Initialise the original and reload counter values to the smallest
* possible value in order to get as much precision as we can.
*/
config->cntr_val[ctridx] = 1;
config->cntrldvr[ctridx] = 1;
/* Set the trace counter control register */
val = 0x1 << 16 | /* Bit 16, reload counter automatically */
0x0 << 7 | /* Select single resource selector */
0x1; /* Resource selector 1, i.e. always true */
config->cntr_ctrl[ctridx] = val;
val = 0x2 << 16 | /* Group 0b0010 - Counter and sequencers */
counter << 0; /* Counter to use */
config->res_ctrl[rselector] = val;
val = 0x0 << 7 | /* Select single resource selector */
rselector; /* Resource selector */
config->ts_ctrl = val;
ret = 0;
out:
return ret;
}
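/*
 * Worked example (hypothetical values, assuming the first free counter is
 * 0 and the first free resource selector is 2): the code above leaves the
 * following in the config, which etm4_enable_hw() later programs into the
 * trace unit:
 *
 *   cntrldvr[0] = cntr_val[0] = 1;    counter counts down from 1
 *   cntr_ctrl[0] = 0x1 << 16 | 0x1;   self-reload, decrement on resource
 *                                     selector 1 (always true)
 *   res_ctrl[2] = 0x2 << 16 | 0x1;    selector 2 fires on counter 0
 *   ts_ctrl = 0x2;                    timestamp event driven by selector 2
 */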
static int etm4_parse_event_config(struct etmv4_drvdata *drvdata, static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
struct perf_event *event) struct perf_event *event)
{ {
...@@ -236,9 +324,29 @@ static int etm4_parse_event_config(struct etmv4_drvdata *drvdata, ...@@ -236,9 +324,29 @@ static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
/* TRM: Must program this for cycacc to work */ /* TRM: Must program this for cycacc to work */
config->ccctlr = ETM_CYC_THRESHOLD_DEFAULT; config->ccctlr = ETM_CYC_THRESHOLD_DEFAULT;
} }
if (attr->config & BIT(ETM_OPT_TS)) if (attr->config & BIT(ETM_OPT_TS)) {
/*
* Configure timestamps to be emitted at regular intervals in
* order to correlate instructions executed on different CPUs
* (CPU-wide trace scenarios).
*/
ret = etm4_config_timestamp_event(drvdata);
/*
* No need to go further if timestamp intervals can't
* be configured.
*/
if (ret)
goto out;
/* bit[11], Global timestamp tracing bit */ /* bit[11], Global timestamp tracing bit */
config->cfg |= BIT(11); config->cfg |= BIT(11);
}
if (attr->config & BIT(ETM_OPT_CTXTID))
/* bit[6], Context ID tracing bit */
config->cfg |= BIT(ETM4_CFG_BIT_CTXTID);
/* return stack - enable if selected and supported */ /* return stack - enable if selected and supported */
if ((attr->config & BIT(ETM_OPT_RETSTK)) && drvdata->retstack) if ((attr->config & BIT(ETM_OPT_RETSTK)) && drvdata->retstack)
/* bit[12], Return stack enable bit */ /* bit[12], Return stack enable bit */
......
...@@ -12,6 +12,8 @@ ...@@ -12,6 +12,8 @@
#include <linux/err.h> #include <linux/err.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include <linux/coresight.h> #include <linux/coresight.h>
#include <linux/amba/bus.h> #include <linux/amba/bus.h>
...@@ -43,7 +45,7 @@ struct funnel_drvdata { ...@@ -43,7 +45,7 @@ struct funnel_drvdata {
unsigned long priority; unsigned long priority;
}; };
static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port) static int dynamic_funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
{ {
u32 functl; u32 functl;
int rc = 0; int rc = 0;
...@@ -71,17 +73,19 @@ static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port) ...@@ -71,17 +73,19 @@ static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
static int funnel_enable(struct coresight_device *csdev, int inport, static int funnel_enable(struct coresight_device *csdev, int inport,
int outport) int outport)
{ {
int rc; int rc = 0;
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
rc = funnel_enable_hw(drvdata, inport); if (drvdata->base)
rc = dynamic_funnel_enable_hw(drvdata, inport);
if (!rc) if (!rc)
dev_dbg(drvdata->dev, "FUNNEL inport %d enabled\n", inport); dev_dbg(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
return rc; return rc;
} }
static void funnel_disable_hw(struct funnel_drvdata *drvdata, int inport) static void dynamic_funnel_disable_hw(struct funnel_drvdata *drvdata,
int inport)
{ {
u32 functl; u32 functl;
...@@ -103,7 +107,8 @@ static void funnel_disable(struct coresight_device *csdev, int inport, ...@@ -103,7 +107,8 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
{ {
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
funnel_disable_hw(drvdata, inport); if (drvdata->base)
dynamic_funnel_disable_hw(drvdata, inport);
dev_dbg(drvdata->dev, "FUNNEL inport %d disabled\n", inport); dev_dbg(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
} }
...@@ -177,54 +182,70 @@ static struct attribute *coresight_funnel_attrs[] = { ...@@ -177,54 +182,70 @@ static struct attribute *coresight_funnel_attrs[] = {
}; };
ATTRIBUTE_GROUPS(coresight_funnel); ATTRIBUTE_GROUPS(coresight_funnel);
static int funnel_probe(struct amba_device *adev, const struct amba_id *id) static int funnel_probe(struct device *dev, struct resource *res)
{ {
int ret; int ret;
void __iomem *base; void __iomem *base;
struct device *dev = &adev->dev;
struct coresight_platform_data *pdata = NULL; struct coresight_platform_data *pdata = NULL;
struct funnel_drvdata *drvdata; struct funnel_drvdata *drvdata;
struct resource *res = &adev->res;
struct coresight_desc desc = { 0 }; struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node; struct device_node *np = dev->of_node;
if (np) { if (np) {
pdata = of_get_coresight_platform_data(dev, np); pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata)) if (IS_ERR(pdata))
return PTR_ERR(pdata); return PTR_ERR(pdata);
adev->dev.platform_data = pdata; dev->platform_data = pdata;
} }
if (of_device_is_compatible(np, "arm,coresight-funnel"))
pr_warn_once("Uses OBSOLETE CoreSight funnel binding\n");
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata) if (!drvdata)
return -ENOMEM; return -ENOMEM;
drvdata->dev = &adev->dev; drvdata->dev = dev;
drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */ drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) { if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk); ret = clk_prepare_enable(drvdata->atclk);
if (ret) if (ret)
return ret; return ret;
} }
dev_set_drvdata(dev, drvdata);
/* Validity for the resource is already checked by the AMBA core */ /*
base = devm_ioremap_resource(dev, res); * Map the device base for dynamic-funnel, which has been
if (IS_ERR(base)) * validated by AMBA core.
return PTR_ERR(base); */
if (res) {
base = devm_ioremap_resource(dev, res);
if (IS_ERR(base)) {
ret = PTR_ERR(base);
goto out_disable_clk;
}
drvdata->base = base;
desc.groups = coresight_funnel_groups;
}
drvdata->base = base; dev_set_drvdata(dev, drvdata);
pm_runtime_put(&adev->dev);
desc.type = CORESIGHT_DEV_TYPE_LINK; desc.type = CORESIGHT_DEV_TYPE_LINK;
desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG; desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG;
desc.ops = &funnel_cs_ops; desc.ops = &funnel_cs_ops;
desc.pdata = pdata; desc.pdata = pdata;
desc.dev = dev; desc.dev = dev;
desc.groups = coresight_funnel_groups;
drvdata->csdev = coresight_register(&desc); drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev);
goto out_disable_clk;
}
pm_runtime_put(dev);
return PTR_ERR_OR_ZERO(drvdata->csdev); out_disable_clk:
if (ret && !IS_ERR_OR_NULL(drvdata->atclk))
clk_disable_unprepare(drvdata->atclk);
return ret;
} }
#ifdef CONFIG_PM #ifdef CONFIG_PM
...@@ -253,7 +274,48 @@ static const struct dev_pm_ops funnel_dev_pm_ops = { ...@@ -253,7 +274,48 @@ static const struct dev_pm_ops funnel_dev_pm_ops = {
SET_RUNTIME_PM_OPS(funnel_runtime_suspend, funnel_runtime_resume, NULL) SET_RUNTIME_PM_OPS(funnel_runtime_suspend, funnel_runtime_resume, NULL)
}; };
static const struct amba_id funnel_ids[] = { static int static_funnel_probe(struct platform_device *pdev)
{
int ret;
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
/* Static funnels do not have a programming base */
ret = funnel_probe(&pdev->dev, NULL);
if (ret) {
pm_runtime_put_noidle(&pdev->dev);
pm_runtime_disable(&pdev->dev);
}
return ret;
}
static const struct of_device_id static_funnel_match[] = {
{.compatible = "arm,coresight-static-funnel"},
{}
};
static struct platform_driver static_funnel_driver = {
.probe = static_funnel_probe,
.driver = {
.name = "coresight-static-funnel",
.of_match_table = static_funnel_match,
.pm = &funnel_dev_pm_ops,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver(static_funnel_driver);
static int dynamic_funnel_probe(struct amba_device *adev,
const struct amba_id *id)
{
return funnel_probe(&adev->dev, &adev->res);
}
static const struct amba_id dynamic_funnel_ids[] = {
{ {
.id = 0x000bb908, .id = 0x000bb908,
.mask = 0x000fffff, .mask = 0x000fffff,
...@@ -266,14 +328,14 @@ static const struct amba_id funnel_ids[] = { ...@@ -266,14 +328,14 @@ static const struct amba_id funnel_ids[] = {
{ 0, 0}, { 0, 0},
}; };
static struct amba_driver funnel_driver = { static struct amba_driver dynamic_funnel_driver = {
.drv = { .drv = {
.name = "coresight-funnel", .name = "coresight-dynamic-funnel",
.owner = THIS_MODULE, .owner = THIS_MODULE,
.pm = &funnel_dev_pm_ops, .pm = &funnel_dev_pm_ops,
.suppress_bind_attrs = true, .suppress_bind_attrs = true,
}, },
.probe = funnel_probe, .probe = dynamic_funnel_probe,
.id_table = funnel_ids, .id_table = dynamic_funnel_ids,
}; };
builtin_amba_driver(funnel_driver); builtin_amba_driver(dynamic_funnel_driver);
// SPDX-License-Identifier: GPL-2.0 // SPDX-License-Identifier: GPL-2.0
/* /*
* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. * Copyright (c) 2011-2015, The Linux Foundation. All rights reserved.
* *
* Description: CoreSight Replicator driver * Description: CoreSight Replicator driver
*/ */
#include <linux/amba/bus.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/device.h> #include <linux/device.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
...@@ -18,25 +19,117 @@ ...@@ -18,25 +19,117 @@
#include "coresight-priv.h" #include "coresight-priv.h"
#define REPLICATOR_IDFILTER0 0x000
#define REPLICATOR_IDFILTER1 0x004
/** /**
* struct replicator_drvdata - specifics associated to a replicator component * struct replicator_drvdata - specifics associated to a replicator component
* @base: memory mapped base address for this component. Also indicates
* whether this one is programmable or not.
* @dev: the device entity associated with this component * @dev: the device entity associated with this component
* @atclk: optional clock for the core parts of the replicator. * @atclk: optional clock for the core parts of the replicator.
* @csdev: component vitals needed by the framework * @csdev: component vitals needed by the framework
*/ */
struct replicator_drvdata { struct replicator_drvdata {
void __iomem *base;
struct device *dev; struct device *dev;
struct clk *atclk; struct clk *atclk;
struct coresight_device *csdev; struct coresight_device *csdev;
}; };
static void dynamic_replicator_reset(struct replicator_drvdata *drvdata)
{
CS_UNLOCK(drvdata->base);
if (!coresight_claim_device_unlocked(drvdata->base)) {
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0);
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1);
coresight_disclaim_device_unlocked(drvdata->base);
}
CS_LOCK(drvdata->base);
}
/*
* replicator_reset : Reset the replicator configuration to sane values.
*/
static inline void replicator_reset(struct replicator_drvdata *drvdata)
{
if (drvdata->base)
dynamic_replicator_reset(drvdata);
}
static int dynamic_replicator_enable(struct replicator_drvdata *drvdata,
int inport, int outport)
{
int rc = 0;
u32 reg;
switch (outport) {
case 0:
reg = REPLICATOR_IDFILTER0;
break;
case 1:
reg = REPLICATOR_IDFILTER1;
break;
default:
WARN_ON(1);
return -EINVAL;
}
CS_UNLOCK(drvdata->base);
if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
(readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
rc = coresight_claim_device_unlocked(drvdata->base);
/* Ensure that the outport is enabled. */
if (!rc)
writel_relaxed(0x00, drvdata->base + reg);
CS_LOCK(drvdata->base);
return rc;
}
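/*
 * In other words: when both ID filters read 0xff the replicator is idle, so
 * enabling a port first claims the device and then writes 0x00 to that
 * port's IDFILTER register to let ATB traffic flow; disabling writes 0xff
 * back and disclaims the device once both filters are closed again.
 */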
static int replicator_enable(struct coresight_device *csdev, int inport, static int replicator_enable(struct coresight_device *csdev, int inport,
int outport) int outport)
{ {
int rc = 0;
struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
dev_dbg(drvdata->dev, "REPLICATOR enabled\n"); if (drvdata->base)
return 0; rc = dynamic_replicator_enable(drvdata, inport, outport);
if (!rc)
dev_dbg(drvdata->dev, "REPLICATOR enabled\n");
return rc;
}
static void dynamic_replicator_disable(struct replicator_drvdata *drvdata,
int inport, int outport)
{
u32 reg;
switch (outport) {
case 0:
reg = REPLICATOR_IDFILTER0;
break;
case 1:
reg = REPLICATOR_IDFILTER1;
break;
default:
WARN_ON(1);
return;
}
CS_UNLOCK(drvdata->base);
/* disable the flow of ATB data through port */
writel_relaxed(0xff, drvdata->base + reg);
if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
(readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
} }
static void replicator_disable(struct coresight_device *csdev, int inport, static void replicator_disable(struct coresight_device *csdev, int inport,
...@@ -44,6 +137,8 @@ static void replicator_disable(struct coresight_device *csdev, int inport, ...@@ -44,6 +137,8 @@ static void replicator_disable(struct coresight_device *csdev, int inport,
{ {
struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
if (drvdata->base)
dynamic_replicator_disable(drvdata, inport, outport);
dev_dbg(drvdata->dev, "REPLICATOR disabled\n"); dev_dbg(drvdata->dev, "REPLICATOR disabled\n");
} }
...@@ -56,58 +151,110 @@ static const struct coresight_ops replicator_cs_ops = { ...@@ -56,58 +151,110 @@ static const struct coresight_ops replicator_cs_ops = {
.link_ops = &replicator_link_ops, .link_ops = &replicator_link_ops,
}; };
static int replicator_probe(struct platform_device *pdev) #define coresight_replicator_reg(name, offset) \
coresight_simple_reg32(struct replicator_drvdata, name, offset)
coresight_replicator_reg(idfilter0, REPLICATOR_IDFILTER0);
coresight_replicator_reg(idfilter1, REPLICATOR_IDFILTER1);
static struct attribute *replicator_mgmt_attrs[] = {
&dev_attr_idfilter0.attr,
&dev_attr_idfilter1.attr,
NULL,
};
static const struct attribute_group replicator_mgmt_group = {
.attrs = replicator_mgmt_attrs,
.name = "mgmt",
};
static const struct attribute_group *replicator_groups[] = {
&replicator_mgmt_group,
NULL,
};
static int replicator_probe(struct device *dev, struct resource *res)
{ {
int ret; int ret = 0;
struct device *dev = &pdev->dev;
struct coresight_platform_data *pdata = NULL; struct coresight_platform_data *pdata = NULL;
struct replicator_drvdata *drvdata; struct replicator_drvdata *drvdata;
struct coresight_desc desc = { 0 }; struct coresight_desc desc = { 0 };
struct device_node *np = pdev->dev.of_node; struct device_node *np = dev->of_node;
void __iomem *base;
if (np) { if (np) {
pdata = of_get_coresight_platform_data(dev, np); pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata)) if (IS_ERR(pdata))
return PTR_ERR(pdata); return PTR_ERR(pdata);
pdev->dev.platform_data = pdata; dev->platform_data = pdata;
} }
if (of_device_is_compatible(np, "arm,coresight-replicator"))
pr_warn_once("Uses OBSOLETE CoreSight replicator binding\n");
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata) if (!drvdata)
return -ENOMEM; return -ENOMEM;
drvdata->dev = &pdev->dev; drvdata->dev = dev;
drvdata->atclk = devm_clk_get(&pdev->dev, "atclk"); /* optional */ drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) { if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk); ret = clk_prepare_enable(drvdata->atclk);
if (ret) if (ret)
return ret; return ret;
} }
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_set_active(&pdev->dev); /*
pm_runtime_enable(&pdev->dev); * Map the device base for dynamic-replicator, which has been
platform_set_drvdata(pdev, drvdata); * validated by AMBA core
*/
if (res) {
base = devm_ioremap_resource(dev, res);
if (IS_ERR(base)) {
ret = PTR_ERR(base);
goto out_disable_clk;
}
drvdata->base = base;
desc.groups = replicator_groups;
}
dev_set_drvdata(dev, drvdata);
desc.type = CORESIGHT_DEV_TYPE_LINK; desc.type = CORESIGHT_DEV_TYPE_LINK;
desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT; desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT;
desc.ops = &replicator_cs_ops; desc.ops = &replicator_cs_ops;
desc.pdata = pdev->dev.platform_data; desc.pdata = dev->platform_data;
desc.dev = &pdev->dev; desc.dev = dev;
drvdata->csdev = coresight_register(&desc); drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev)) { if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev); ret = PTR_ERR(drvdata->csdev);
goto out_disable_pm; goto out_disable_clk;
} }
pm_runtime_put(&pdev->dev); replicator_reset(drvdata);
pm_runtime_put(dev);
return 0;
out_disable_pm: out_disable_clk:
if (!IS_ERR(drvdata->atclk)) if (ret && !IS_ERR_OR_NULL(drvdata->atclk))
clk_disable_unprepare(drvdata->atclk); clk_disable_unprepare(drvdata->atclk);
pm_runtime_put_noidle(&pdev->dev); return ret;
pm_runtime_disable(&pdev->dev); }
static int static_replicator_probe(struct platform_device *pdev)
{
int ret;
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
/* Static replicators do not have a programming base */
ret = replicator_probe(&pdev->dev, NULL);
if (ret) {
pm_runtime_put_noidle(&pdev->dev);
pm_runtime_disable(&pdev->dev);
}
return ret; return ret;
} }
...@@ -139,18 +286,49 @@ static const struct dev_pm_ops replicator_dev_pm_ops = { ...@@ -139,18 +286,49 @@ static const struct dev_pm_ops replicator_dev_pm_ops = {
replicator_runtime_resume, NULL) replicator_runtime_resume, NULL)
}; };
static const struct of_device_id replicator_match[] = { static const struct of_device_id static_replicator_match[] = {
{.compatible = "arm,coresight-replicator"}, {.compatible = "arm,coresight-replicator"},
{.compatible = "arm,coresight-static-replicator"},
{} {}
}; };
static struct platform_driver replicator_driver = { static struct platform_driver static_replicator_driver = {
.probe = replicator_probe, .probe = static_replicator_probe,
.driver = { .driver = {
.name = "coresight-replicator", .name = "coresight-static-replicator",
.of_match_table = replicator_match, .of_match_table = static_replicator_match,
.pm = &replicator_dev_pm_ops,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver(static_replicator_driver);
static int dynamic_replicator_probe(struct amba_device *adev,
const struct amba_id *id)
{
return replicator_probe(&adev->dev, &adev->res);
}
static const struct amba_id dynamic_replicator_ids[] = {
{
.id = 0x000bb909,
.mask = 0x000fffff,
},
{
/* Coresight SoC-600 */
.id = 0x000bb9ec,
.mask = 0x000fffff,
},
{ 0, 0 },
};
static struct amba_driver dynamic_replicator_driver = {
.drv = {
.name = "coresight-dynamic-replicator",
.pm = &replicator_dev_pm_ops, .pm = &replicator_dev_pm_ops,
.suppress_bind_attrs = true, .suppress_bind_attrs = true,
}, },
.probe = dynamic_replicator_probe,
.id_table = dynamic_replicator_ids,
}; };
builtin_platform_driver(replicator_driver); builtin_amba_driver(dynamic_replicator_driver);
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
* Author: Mathieu Poirier <mathieu.poirier@linaro.org> * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
*/ */
#include <linux/atomic.h>
#include <linux/circ_buf.h> #include <linux/circ_buf.h>
#include <linux/coresight.h> #include <linux/coresight.h>
#include <linux/perf_event.h> #include <linux/perf_event.h>
...@@ -180,8 +181,10 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev) ...@@ -180,8 +181,10 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev)
* sink is already enabled no memory is needed and the HW need not be * sink is already enabled no memory is needed and the HW need not be
* touched. * touched.
*/ */
if (drvdata->mode == CS_MODE_SYSFS) if (drvdata->mode == CS_MODE_SYSFS) {
atomic_inc(csdev->refcnt);
goto out; goto out;
}
/* /*
* If drvdata::buf isn't NULL, memory was allocated for a previous * If drvdata::buf isn't NULL, memory was allocated for a previous
...@@ -200,11 +203,13 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev) ...@@ -200,11 +203,13 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev)
} }
ret = tmc_etb_enable_hw(drvdata); ret = tmc_etb_enable_hw(drvdata);
if (!ret) if (!ret) {
drvdata->mode = CS_MODE_SYSFS; drvdata->mode = CS_MODE_SYSFS;
else atomic_inc(csdev->refcnt);
} else {
/* Free up the buffer if we failed to enable */ /* Free up the buffer if we failed to enable */
used = false; used = false;
}
out: out:
spin_unlock_irqrestore(&drvdata->spinlock, flags); spin_unlock_irqrestore(&drvdata->spinlock, flags);
...@@ -218,6 +223,7 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev) ...@@ -218,6 +223,7 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev)
static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data) static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data)
{ {
int ret = 0; int ret = 0;
pid_t pid;
unsigned long flags; unsigned long flags;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct perf_output_handle *handle = data; struct perf_output_handle *handle = data;
...@@ -228,19 +234,42 @@ static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data) ...@@ -228,19 +234,42 @@ static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data)
if (drvdata->reading) if (drvdata->reading)
break; break;
/* /*
* In Perf mode there can be only one writer per sink. There * No need to continue if the ETB/ETF is already operated
* is also no need to continue if the ETB/ETF is already * from sysFS.
* operated from sysFS.
*/ */
if (drvdata->mode != CS_MODE_DISABLED) if (drvdata->mode == CS_MODE_SYSFS) {
ret = -EBUSY;
break;
}
/* Get a handle on the pid of the process to monitor */
pid = task_pid_nr(handle->event->owner);
if (drvdata->pid != -1 && drvdata->pid != pid) {
ret = -EBUSY;
break; break;
}
ret = tmc_set_etf_buffer(csdev, handle); ret = tmc_set_etf_buffer(csdev, handle);
if (ret) if (ret)
break; break;
/*
* No HW configuration is needed if the sink is already in
* use for this session.
*/
if (drvdata->pid == pid) {
atomic_inc(csdev->refcnt);
break;
}
ret = tmc_etb_enable_hw(drvdata); ret = tmc_etb_enable_hw(drvdata);
if (!ret) if (!ret) {
/* Associate with monitored process. */
drvdata->pid = pid;
drvdata->mode = CS_MODE_PERF; drvdata->mode = CS_MODE_PERF;
atomic_inc(csdev->refcnt);
}
} while (0); } while (0);
spin_unlock_irqrestore(&drvdata->spinlock, flags); spin_unlock_irqrestore(&drvdata->spinlock, flags);
...@@ -273,26 +302,34 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev, ...@@ -273,26 +302,34 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev,
return 0; return 0;
} }
static void tmc_disable_etf_sink(struct coresight_device *csdev) static int tmc_disable_etf_sink(struct coresight_device *csdev)
{ {
unsigned long flags; unsigned long flags;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
spin_lock_irqsave(&drvdata->spinlock, flags); spin_lock_irqsave(&drvdata->spinlock, flags);
if (drvdata->reading) { if (drvdata->reading) {
spin_unlock_irqrestore(&drvdata->spinlock, flags); spin_unlock_irqrestore(&drvdata->spinlock, flags);
return; return -EBUSY;
} }
/* Disable the TMC only if it needs to */ if (atomic_dec_return(csdev->refcnt)) {
if (drvdata->mode != CS_MODE_DISABLED) { spin_unlock_irqrestore(&drvdata->spinlock, flags);
tmc_etb_disable_hw(drvdata); return -EBUSY;
drvdata->mode = CS_MODE_DISABLED;
} }
/* Complain if we (somehow) got out of sync */
WARN_ON_ONCE(drvdata->mode == CS_MODE_DISABLED);
tmc_etb_disable_hw(drvdata);
/* Dissociate from monitored process. */
drvdata->pid = -1;
drvdata->mode = CS_MODE_DISABLED;
spin_unlock_irqrestore(&drvdata->spinlock, flags); spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_dbg(drvdata->dev, "TMC-ETB/ETF disabled\n"); dev_dbg(drvdata->dev, "TMC-ETB/ETF disabled\n");
return 0;
} }
static int tmc_enable_etf_link(struct coresight_device *csdev, static int tmc_enable_etf_link(struct coresight_device *csdev,
...@@ -337,10 +374,11 @@ static void tmc_disable_etf_link(struct coresight_device *csdev, ...@@ -337,10 +374,11 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
dev_dbg(drvdata->dev, "TMC-ETF disabled\n"); dev_dbg(drvdata->dev, "TMC-ETF disabled\n");
} }
static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu, static void *tmc_alloc_etf_buffer(struct coresight_device *csdev,
void **pages, int nr_pages, bool overwrite) struct perf_event *event, void **pages,
int nr_pages, bool overwrite)
{ {
int node; int node, cpu = event->cpu;
struct cs_buffers *buf; struct cs_buffers *buf;
if (cpu == -1) if (cpu == -1)
...@@ -400,7 +438,7 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, ...@@ -400,7 +438,7 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
u32 *buf_ptr; u32 *buf_ptr;
u64 read_ptr, write_ptr; u64 read_ptr, write_ptr;
u32 status; u32 status;
unsigned long offset, to_read; unsigned long offset, to_read = 0, flags;
struct cs_buffers *buf = sink_config; struct cs_buffers *buf = sink_config;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
...@@ -411,6 +449,12 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, ...@@ -411,6 +449,12 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
if (WARN_ON_ONCE(drvdata->mode != CS_MODE_PERF)) if (WARN_ON_ONCE(drvdata->mode != CS_MODE_PERF))
return 0; return 0;
spin_lock_irqsave(&drvdata->spinlock, flags);
/* Don't do anything if another tracer is using this sink */
if (atomic_read(csdev->refcnt) != 1)
goto out;
CS_UNLOCK(drvdata->base); CS_UNLOCK(drvdata->base);
tmc_flush_and_stop(drvdata); tmc_flush_and_stop(drvdata);
...@@ -504,6 +548,8 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, ...@@ -504,6 +548,8 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
to_read = buf->nr_pages << PAGE_SHIFT; to_read = buf->nr_pages << PAGE_SHIFT;
} }
CS_LOCK(drvdata->base); CS_LOCK(drvdata->base);
out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return to_read; return to_read;
} }
......
...@@ -8,10 +8,12 @@ ...@@ -8,10 +8,12 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/device.h> #include <linux/device.h>
#include <linux/idr.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/miscdevice.h> #include <linux/miscdevice.h>
#include <linux/mutex.h>
#include <linux/property.h> #include <linux/property.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <linux/slab.h> #include <linux/slab.h>
...@@ -340,6 +342,8 @@ static inline bool tmc_etr_can_use_sg(struct tmc_drvdata *drvdata) ...@@ -340,6 +342,8 @@ static inline bool tmc_etr_can_use_sg(struct tmc_drvdata *drvdata)
static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata, static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata,
u32 devid, void *dev_caps) u32 devid, void *dev_caps)
{ {
int rc;
u32 dma_mask = 0; u32 dma_mask = 0;
/* Set the unadvertised capabilities */ /* Set the unadvertised capabilities */
...@@ -369,7 +373,10 @@ static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata, ...@@ -369,7 +373,10 @@ static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata,
dma_mask = 40; dma_mask = 40;
} }
return dma_set_mask_and_coherent(drvdata->dev, DMA_BIT_MASK(dma_mask)); rc = dma_set_mask_and_coherent(drvdata->dev, DMA_BIT_MASK(dma_mask));
if (rc)
dev_err(drvdata->dev, "Failed to setup DMA mask: %d\n", rc);
return rc;
} }
static int tmc_probe(struct amba_device *adev, const struct amba_id *id) static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
...@@ -415,6 +422,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -415,6 +422,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID); devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
drvdata->config_type = BMVAL(devid, 6, 7); drvdata->config_type = BMVAL(devid, 6, 7);
drvdata->memwidth = tmc_get_memwidth(devid); drvdata->memwidth = tmc_get_memwidth(devid);
/* This device is not associated with a session */
drvdata->pid = -1;
if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
if (np) if (np)
...@@ -427,8 +436,6 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -427,8 +436,6 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4; drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4;
} }
pm_runtime_put(&adev->dev);
desc.pdata = pdata; desc.pdata = pdata;
desc.dev = dev; desc.dev = dev;
desc.groups = coresight_tmc_groups; desc.groups = coresight_tmc_groups;
...@@ -447,6 +454,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -447,6 +454,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
coresight_get_uci_data(id)); coresight_get_uci_data(id));
if (ret) if (ret)
goto out; goto out;
idr_init(&drvdata->idr);
mutex_init(&drvdata->idr_mutex);
break; break;
case TMC_CONFIG_TYPE_ETF: case TMC_CONFIG_TYPE_ETF:
desc.type = CORESIGHT_DEV_TYPE_LINKSINK; desc.type = CORESIGHT_DEV_TYPE_LINKSINK;
...@@ -471,6 +480,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -471,6 +480,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
ret = misc_register(&drvdata->miscdev); ret = misc_register(&drvdata->miscdev);
if (ret) if (ret)
coresight_unregister(drvdata->csdev); coresight_unregister(drvdata->csdev);
else
pm_runtime_put(&adev->dev);
out: out:
return ret; return ret;
} }
......
...@@ -8,7 +8,10 @@ ...@@ -8,7 +8,10 @@
#define _CORESIGHT_TMC_H #define _CORESIGHT_TMC_H
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/idr.h>
#include <linux/miscdevice.h> #include <linux/miscdevice.h>
#include <linux/mutex.h>
#include <linux/refcount.h>
#define TMC_RSZ 0x004 #define TMC_RSZ 0x004
#define TMC_STS 0x00c #define TMC_STS 0x00c
...@@ -133,6 +136,7 @@ struct etr_buf_operations; ...@@ -133,6 +136,7 @@ struct etr_buf_operations;
/** /**
* struct etr_buf - Details of the buffer used by ETR * struct etr_buf - Details of the buffer used by ETR
 * @refcount : Number of sources currently using this etr_buf.
* @mode : Mode of the ETR buffer, contiguous, Scatter Gather etc. * @mode : Mode of the ETR buffer, contiguous, Scatter Gather etc.
* @full : Trace data overflow * @full : Trace data overflow
* @size : Size of the buffer. * @size : Size of the buffer.
...@@ -143,6 +147,7 @@ struct etr_buf_operations; ...@@ -143,6 +147,7 @@ struct etr_buf_operations;
* @private : Backend specific information for the buf * @private : Backend specific information for the buf
*/ */
struct etr_buf { struct etr_buf {
refcount_t refcount;
enum etr_mode mode; enum etr_mode mode;
bool full; bool full;
ssize_t size; ssize_t size;
...@@ -160,6 +165,8 @@ struct etr_buf { ...@@ -160,6 +165,8 @@ struct etr_buf {
* @csdev: component vitals needed by the framework. * @csdev: component vitals needed by the framework.
* @miscdev: specifics to handle "/dev/xyz.tmc" entry. * @miscdev: specifics to handle "/dev/xyz.tmc" entry.
* @spinlock: only one at a time pls. * @spinlock: only one at a time pls.
* @pid: Process ID of the process being monitored by the session
* that is using this component.
* @buf: Snapshot of the trace data for ETF/ETB. * @buf: Snapshot of the trace data for ETF/ETB.
* @etr_buf: details of buffer used in TMC-ETR * @etr_buf: details of buffer used in TMC-ETR
* @len: size of the available trace for ETF/ETB. * @len: size of the available trace for ETF/ETB.
...@@ -170,6 +177,8 @@ struct etr_buf { ...@@ -170,6 +177,8 @@ struct etr_buf {
* @trigger_cntr: amount of words to store after a trigger. * @trigger_cntr: amount of words to store after a trigger.
* @etr_caps: Bitmask of capabilities of the TMC ETR, inferred from the * @etr_caps: Bitmask of capabilities of the TMC ETR, inferred from the
* device configuration register (DEVID) * device configuration register (DEVID)
* @idr: Holds etr_bufs allocated for this ETR.
* @idr_mutex: Access serialisation for idr.
* @perf_data: PERF buffer for ETR. * @perf_data: PERF buffer for ETR.
* @sysfs_data: SYSFS buffer for ETR. * @sysfs_data: SYSFS buffer for ETR.
*/ */
...@@ -179,6 +188,7 @@ struct tmc_drvdata { ...@@ -179,6 +188,7 @@ struct tmc_drvdata {
struct coresight_device *csdev; struct coresight_device *csdev;
struct miscdevice miscdev; struct miscdevice miscdev;
spinlock_t spinlock; spinlock_t spinlock;
pid_t pid;
bool reading; bool reading;
union { union {
char *buf; /* TMC ETB */ char *buf; /* TMC ETB */
...@@ -191,6 +201,8 @@ struct tmc_drvdata { ...@@ -191,6 +201,8 @@ struct tmc_drvdata {
enum tmc_mem_intf_width memwidth; enum tmc_mem_intf_width memwidth;
u32 trigger_cntr; u32 trigger_cntr;
u32 etr_caps; u32 etr_caps;
struct idr idr;
struct mutex idr_mutex;
struct etr_buf *sysfs_buf; struct etr_buf *sysfs_buf;
void *perf_data; void *perf_data;
}; };
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
* Description: CoreSight Trace Port Interface Unit driver * Description: CoreSight Trace Port Interface Unit driver
*/ */
#include <linux/atomic.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/device.h> #include <linux/device.h>
...@@ -73,7 +74,7 @@ static int tpiu_enable(struct coresight_device *csdev, u32 mode, void *__unused) ...@@ -73,7 +74,7 @@ static int tpiu_enable(struct coresight_device *csdev, u32 mode, void *__unused)
struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
tpiu_enable_hw(drvdata); tpiu_enable_hw(drvdata);
atomic_inc(csdev->refcnt);
dev_dbg(drvdata->dev, "TPIU enabled\n"); dev_dbg(drvdata->dev, "TPIU enabled\n");
return 0; return 0;
} }
...@@ -94,13 +95,17 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata) ...@@ -94,13 +95,17 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
CS_LOCK(drvdata->base); CS_LOCK(drvdata->base);
} }
static void tpiu_disable(struct coresight_device *csdev) static int tpiu_disable(struct coresight_device *csdev)
{ {
struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
if (atomic_dec_return(csdev->refcnt))
return -EBUSY;
tpiu_disable_hw(drvdata); tpiu_disable_hw(drvdata);
dev_dbg(drvdata->dev, "TPIU disabled\n"); dev_dbg(drvdata->dev, "TPIU disabled\n");
return 0;
} }
static const struct coresight_ops_sink tpiu_sink_ops = { static const struct coresight_ops_sink tpiu_sink_ops = {
...@@ -153,8 +158,6 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -153,8 +158,6 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
/* Disable tpiu to support older devices */ /* Disable tpiu to support older devices */
tpiu_disable_hw(drvdata); tpiu_disable_hw(drvdata);
pm_runtime_put(&adev->dev);
desc.type = CORESIGHT_DEV_TYPE_SINK; desc.type = CORESIGHT_DEV_TYPE_SINK;
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PORT; desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PORT;
desc.ops = &tpiu_cs_ops; desc.ops = &tpiu_cs_ops;
...@@ -162,7 +165,12 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -162,7 +165,12 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
desc.dev = dev; desc.dev = dev;
drvdata->csdev = coresight_register(&desc); drvdata->csdev = coresight_register(&desc);
return PTR_ERR_OR_ZERO(drvdata->csdev); if (!IS_ERR(drvdata->csdev)) {
pm_runtime_put(&adev->dev);
return 0;
}
return PTR_ERR(drvdata->csdev);
} }
#ifdef CONFIG_PM #ifdef CONFIG_PM
......
...@@ -225,26 +225,28 @@ static int coresight_enable_sink(struct coresight_device *csdev, ...@@ -225,26 +225,28 @@ static int coresight_enable_sink(struct coresight_device *csdev,
* We need to make sure the "new" session is compatible with the * We need to make sure the "new" session is compatible with the
* existing "mode" of operation. * existing "mode" of operation.
*/ */
if (sink_ops(csdev)->enable) { if (!sink_ops(csdev)->enable)
ret = sink_ops(csdev)->enable(csdev, mode, data); return -EINVAL;
if (ret)
return ret;
csdev->enable = true;
}
atomic_inc(csdev->refcnt); ret = sink_ops(csdev)->enable(csdev, mode, data);
if (ret)
return ret;
csdev->enable = true;
return 0; return 0;
} }
static void coresight_disable_sink(struct coresight_device *csdev) static void coresight_disable_sink(struct coresight_device *csdev)
{ {
if (atomic_dec_return(csdev->refcnt) == 0) { int ret;
if (sink_ops(csdev)->disable) {
sink_ops(csdev)->disable(csdev); if (!sink_ops(csdev)->disable)
csdev->enable = false; return;
}
} ret = sink_ops(csdev)->disable(csdev);
if (ret)
return;
csdev->enable = false;
} }
static int coresight_enable_link(struct coresight_device *csdev, static int coresight_enable_link(struct coresight_device *csdev,
...@@ -973,7 +975,6 @@ static void coresight_device_release(struct device *dev) ...@@ -973,7 +975,6 @@ static void coresight_device_release(struct device *dev)
{ {
struct coresight_device *csdev = to_coresight_device(dev); struct coresight_device *csdev = to_coresight_device(dev);
kfree(csdev->conns);
kfree(csdev->refcnt); kfree(csdev->refcnt);
kfree(csdev); kfree(csdev);
} }
......
...@@ -37,15 +37,21 @@ MODULE_DEVICE_TABLE(acpi, intel_th_acpi_ids); ...@@ -37,15 +37,21 @@ MODULE_DEVICE_TABLE(acpi, intel_th_acpi_ids);
static int intel_th_acpi_probe(struct platform_device *pdev) static int intel_th_acpi_probe(struct platform_device *pdev)
{ {
struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
struct resource resource[TH_MMIO_END];
const struct acpi_device_id *id; const struct acpi_device_id *id;
struct intel_th *th; struct intel_th *th;
int i, r;
id = acpi_match_device(intel_th_acpi_ids, &pdev->dev); id = acpi_match_device(intel_th_acpi_ids, &pdev->dev);
if (!id) if (!id)
return -ENODEV; return -ENODEV;
th = intel_th_alloc(&pdev->dev, (void *)id->driver_data, for (i = 0, r = 0; i < pdev->num_resources && r < TH_MMIO_END; i++)
pdev->resource, pdev->num_resources, -1); if (pdev->resource[i].flags &
(IORESOURCE_IRQ | IORESOURCE_MEM))
resource[r++] = pdev->resource[i];
th = intel_th_alloc(&pdev->dev, (void *)id->driver_data, resource, r);
if (IS_ERR(th)) if (IS_ERR(th))
return PTR_ERR(th); return PTR_ERR(th);
......
...@@ -430,9 +430,9 @@ static const struct intel_th_subdevice { ...@@ -430,9 +430,9 @@ static const struct intel_th_subdevice {
.nres = 1, .nres = 1,
.res = { .res = {
{ {
/* Handle TSCU from GTH driver */ /* Handle TSCU and CTS from GTH driver */
.start = REG_GTH_OFFSET, .start = REG_GTH_OFFSET,
.end = REG_TSCU_OFFSET + REG_TSCU_LENGTH - 1, .end = REG_CTS_OFFSET + REG_CTS_LENGTH - 1,
.flags = IORESOURCE_MEM, .flags = IORESOURCE_MEM,
}, },
}, },
...@@ -491,7 +491,7 @@ static const struct intel_th_subdevice { ...@@ -491,7 +491,7 @@ static const struct intel_th_subdevice {
.flags = IORESOURCE_MEM, .flags = IORESOURCE_MEM,
}, },
{ {
.start = 1, /* use resource[1] */ .start = TH_MMIO_SW,
.end = 0, .end = 0,
.flags = IORESOURCE_MEM, .flags = IORESOURCE_MEM,
}, },
...@@ -500,6 +500,24 @@ static const struct intel_th_subdevice { ...@@ -500,6 +500,24 @@ static const struct intel_th_subdevice {
.name = "sth", .name = "sth",
.type = INTEL_TH_SOURCE, .type = INTEL_TH_SOURCE,
}, },
{
.nres = 2,
.res = {
{
.start = REG_STH_OFFSET,
.end = REG_STH_OFFSET + REG_STH_LENGTH - 1,
.flags = IORESOURCE_MEM,
},
{
.start = TH_MMIO_RTIT,
.end = 0,
.flags = IORESOURCE_MEM,
},
},
.id = -1,
.name = "rtit",
.type = INTEL_TH_SOURCE,
},
{ {
.nres = 1, .nres = 1,
.res = { .res = {
...@@ -584,7 +602,6 @@ intel_th_subdevice_alloc(struct intel_th *th, ...@@ -584,7 +602,6 @@ intel_th_subdevice_alloc(struct intel_th *th,
struct intel_th_device *thdev; struct intel_th_device *thdev;
struct resource res[3]; struct resource res[3];
unsigned int req = 0; unsigned int req = 0;
bool is64bit = false;
int r, err; int r, err;
thdev = intel_th_device_alloc(th, subdev->type, subdev->name, thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
...@@ -594,18 +611,12 @@ intel_th_subdevice_alloc(struct intel_th *th, ...@@ -594,18 +611,12 @@ intel_th_subdevice_alloc(struct intel_th *th,
thdev->drvdata = th->drvdata; thdev->drvdata = th->drvdata;
for (r = 0; r < th->num_resources; r++)
if (th->resource[r].flags & IORESOURCE_MEM_64) {
is64bit = true;
break;
}
memcpy(res, subdev->res, memcpy(res, subdev->res,
sizeof(struct resource) * subdev->nres); sizeof(struct resource) * subdev->nres);
for (r = 0; r < subdev->nres; r++) { for (r = 0; r < subdev->nres; r++) {
struct resource *devres = th->resource; struct resource *devres = th->resource;
int bar = 0; /* cut subdevices' MMIO from resource[0] */ int bar = TH_MMIO_CONFIG;
/* /*
* Take .end == 0 to mean 'take the whole bar', * Take .end == 0 to mean 'take the whole bar',
...@@ -614,8 +625,9 @@ intel_th_subdevice_alloc(struct intel_th *th, ...@@ -614,8 +625,9 @@ intel_th_subdevice_alloc(struct intel_th *th,
*/ */
if (!res[r].end && res[r].flags == IORESOURCE_MEM) { if (!res[r].end && res[r].flags == IORESOURCE_MEM) {
bar = res[r].start; bar = res[r].start;
if (is64bit) err = -ENODEV;
bar *= 2; if (bar >= th->num_resources)
goto fail_put_device;
res[r].start = 0; res[r].start = 0;
res[r].end = resource_size(&devres[bar]) - 1; res[r].end = resource_size(&devres[bar]) - 1;
} }
...@@ -627,7 +639,12 @@ intel_th_subdevice_alloc(struct intel_th *th, ...@@ -627,7 +639,12 @@ intel_th_subdevice_alloc(struct intel_th *th,
dev_dbg(th->dev, "%s:%d @ %pR\n", dev_dbg(th->dev, "%s:%d @ %pR\n",
subdev->name, r, &res[r]); subdev->name, r, &res[r]);
} else if (res[r].flags & IORESOURCE_IRQ) { } else if (res[r].flags & IORESOURCE_IRQ) {
res[r].start = th->irq; /*
* Only pass on the IRQ if we have useful interrupts:
* the ones that can be configured via MINTCTL.
*/
if (INTEL_TH_CAP(th, has_mintctl) && th->irq != -1)
res[r].start = th->irq;
} }
} }
...@@ -758,8 +775,13 @@ static int intel_th_populate(struct intel_th *th) ...@@ -758,8 +775,13 @@ static int intel_th_populate(struct intel_th *th)
thdev = intel_th_subdevice_alloc(th, subdev); thdev = intel_th_subdevice_alloc(th, subdev);
/* note: caller should free subdevices from th::thdev[] */ /* note: caller should free subdevices from th::thdev[] */
if (IS_ERR(thdev)) if (IS_ERR(thdev)) {
/* ENODEV for individual subdevices is allowed */
if (PTR_ERR(thdev) == -ENODEV)
continue;
return PTR_ERR(thdev); return PTR_ERR(thdev);
}
th->thdev[th->num_thdevs++] = thdev; th->thdev[th->num_thdevs++] = thdev;
} }
...@@ -809,26 +831,40 @@ static const struct file_operations intel_th_output_fops = { ...@@ -809,26 +831,40 @@ static const struct file_operations intel_th_output_fops = {
.llseek = noop_llseek, .llseek = noop_llseek,
}; };
static irqreturn_t intel_th_irq(int irq, void *data)
{
struct intel_th *th = data;
irqreturn_t ret = IRQ_NONE;
struct intel_th_driver *d;
int i;
for (i = 0; i < th->num_thdevs; i++) {
if (th->thdev[i]->type != INTEL_TH_OUTPUT)
continue;
d = to_intel_th_driver(th->thdev[i]->dev.driver);
if (d && d->irq)
ret |= d->irq(th->thdev[i]);
}
if (ret == IRQ_NONE)
pr_warn_ratelimited("nobody cared for irq\n");
return ret;
}
/** /**
* intel_th_alloc() - allocate a new Intel TH device and its subdevices * intel_th_alloc() - allocate a new Intel TH device and its subdevices
* @dev: parent device * @dev: parent device
* @devres: parent's resources * @devres: resources indexed by th_mmio_idx
* @ndevres: number of resources
* @irq: irq number * @irq: irq number
*/ */
struct intel_th * struct intel_th *
intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata, intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
struct resource *devres, unsigned int ndevres, int irq) struct resource *devres, unsigned int ndevres)
{ {
int err, r, nr_mmios = 0;
struct intel_th *th; struct intel_th *th;
int err, r;
if (irq == -1)
for (r = 0; r < ndevres; r++)
if (devres[r].flags & IORESOURCE_IRQ) {
irq = devres[r].start;
break;
}
th = kzalloc(sizeof(*th), GFP_KERNEL); th = kzalloc(sizeof(*th), GFP_KERNEL);
if (!th) if (!th)
...@@ -846,12 +882,32 @@ intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata, ...@@ -846,12 +882,32 @@ intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
err = th->major; err = th->major;
goto err_ida; goto err_ida;
} }
th->irq = -1;
th->dev = dev; th->dev = dev;
th->drvdata = drvdata; th->drvdata = drvdata;
th->resource = devres; for (r = 0; r < ndevres; r++)
th->num_resources = ndevres; switch (devres[r].flags & IORESOURCE_TYPE_BITS) {
th->irq = irq; case IORESOURCE_MEM:
th->resource[nr_mmios++] = devres[r];
break;
case IORESOURCE_IRQ:
err = devm_request_irq(dev, devres[r].start,
intel_th_irq, IRQF_SHARED,
dev_name(dev), th);
if (err)
goto err_chrdev;
if (th->irq == -1)
th->irq = devres[r].start;
break;
default:
dev_warn(dev, "Unknown resource type %lx\n",
devres[r].flags);
break;
}
th->num_resources = nr_mmios;
dev_set_drvdata(dev, th); dev_set_drvdata(dev, th);
...@@ -868,6 +924,10 @@ intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata, ...@@ -868,6 +924,10 @@ intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
return th; return th;
err_chrdev:
__unregister_chrdev(th->major, 0, TH_POSSIBLE_OUTPUTS,
"intel_th/output");
err_ida: err_ida:
ida_simple_remove(&intel_th_ida, th->id); ida_simple_remove(&intel_th_ida, th->id);
...@@ -927,6 +987,27 @@ int intel_th_trace_enable(struct intel_th_device *thdev) ...@@ -927,6 +987,27 @@ int intel_th_trace_enable(struct intel_th_device *thdev)
} }
EXPORT_SYMBOL_GPL(intel_th_trace_enable); EXPORT_SYMBOL_GPL(intel_th_trace_enable);
/**
* intel_th_trace_switch() - execute a switch sequence
* @thdev: output device that requests tracing switch
*/
int intel_th_trace_switch(struct intel_th_device *thdev)
{
struct intel_th_device *hub = to_intel_th_device(thdev->dev.parent);
struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver);
if (WARN_ON_ONCE(hub->type != INTEL_TH_SWITCH))
return -EINVAL;
if (WARN_ON_ONCE(thdev->type != INTEL_TH_OUTPUT))
return -EINVAL;
hubdrv->trig_switch(hub, &thdev->output);
return 0;
}
EXPORT_SYMBOL_GPL(intel_th_trace_switch);
/** /**
* intel_th_trace_disable() - disable tracing for an output device * intel_th_trace_disable() - disable tracing for an output device
* @thdev: output device that requests tracing be disabled * @thdev: output device that requests tracing be disabled
......
...@@ -308,6 +308,11 @@ static int intel_th_gth_reset(struct gth_device *gth) ...@@ -308,6 +308,11 @@ static int intel_th_gth_reset(struct gth_device *gth)
iowrite32(0, gth->base + REG_GTH_SCR); iowrite32(0, gth->base + REG_GTH_SCR);
iowrite32(0xfc, gth->base + REG_GTH_SCR2); iowrite32(0xfc, gth->base + REG_GTH_SCR2);
/* setup CTS for single trigger */
iowrite32(CTS_EVENT_ENABLE_IF_ANYTHING, gth->base + REG_CTS_C0S0_EN);
iowrite32(CTS_ACTION_CONTROL_SET_STATE(CTS_STATE_IDLE) |
CTS_ACTION_CONTROL_TRIGGER, gth->base + REG_CTS_C0S0_ACT);
return 0; return 0;
} }
...@@ -456,6 +461,68 @@ static int intel_th_output_attributes(struct gth_device *gth) ...@@ -456,6 +461,68 @@ static int intel_th_output_attributes(struct gth_device *gth)
return sysfs_create_group(&gth->dev->kobj, &gth->output_group); return sysfs_create_group(&gth->dev->kobj, &gth->output_group);
} }
/**
* intel_th_gth_stop() - stop tracing to an output device
* @gth: GTH device
* @output: output device's descriptor
* @capture_done: set when no more traces will be captured
*
 * This will stop tracing using the force storeEn off signal and wait for the
 * pipelines of the corresponding output port to be empty.
*/
static void intel_th_gth_stop(struct gth_device *gth,
struct intel_th_output *output,
bool capture_done)
{
struct intel_th_device *outdev =
container_of(output, struct intel_th_device, output);
struct intel_th_driver *outdrv =
to_intel_th_driver(outdev->dev.driver);
unsigned long count;
u32 reg;
u32 scr2 = 0xfc | (capture_done ? 1 : 0);
iowrite32(0, gth->base + REG_GTH_SCR);
iowrite32(scr2, gth->base + REG_GTH_SCR2);
/* wait on pipeline empty for the given port */
for (reg = 0, count = GTH_PLE_WAITLOOP_DEPTH;
count && !(reg & BIT(output->port)); count--) {
reg = ioread32(gth->base + REG_GTH_STAT);
cpu_relax();
}
if (!count)
dev_dbg(gth->dev, "timeout waiting for GTH[%d] PLE\n",
output->port);
/* wait on output pipeline empty */
if (outdrv->wait_empty)
outdrv->wait_empty(outdev);
/* clear force capture done for next captures */
iowrite32(0xfc, gth->base + REG_GTH_SCR2);
}
/**
* intel_th_gth_start() - start tracing to an output device
* @gth: GTH device
* @output: output device's descriptor
*
 * This will start tracing using the force storeEn signal.
*/
static void intel_th_gth_start(struct gth_device *gth,
struct intel_th_output *output)
{
u32 scr = 0xfc0000;
if (output->multiblock)
scr |= 0xff;
iowrite32(scr, gth->base + REG_GTH_SCR);
iowrite32(0, gth->base + REG_GTH_SCR2);
}
/** /**
* intel_th_gth_disable() - disable tracing to an output device * intel_th_gth_disable() - disable tracing to an output device
* @thdev: GTH device * @thdev: GTH device
...@@ -469,7 +536,6 @@ static void intel_th_gth_disable(struct intel_th_device *thdev, ...@@ -469,7 +536,6 @@ static void intel_th_gth_disable(struct intel_th_device *thdev,
struct intel_th_output *output) struct intel_th_output *output)
{ {
struct gth_device *gth = dev_get_drvdata(&thdev->dev); struct gth_device *gth = dev_get_drvdata(&thdev->dev);
unsigned long count;
int master; int master;
u32 reg; u32 reg;
...@@ -482,22 +548,7 @@ static void intel_th_gth_disable(struct intel_th_device *thdev, ...@@ -482,22 +548,7 @@ static void intel_th_gth_disable(struct intel_th_device *thdev,
} }
spin_unlock(&gth->gth_lock); spin_unlock(&gth->gth_lock);
iowrite32(0, gth->base + REG_GTH_SCR); intel_th_gth_stop(gth, output, true);
iowrite32(0xfd, gth->base + REG_GTH_SCR2);
/* wait on pipeline empty for the given port */
for (reg = 0, count = GTH_PLE_WAITLOOP_DEPTH;
count && !(reg & BIT(output->port)); count--) {
reg = ioread32(gth->base + REG_GTH_STAT);
cpu_relax();
}
/* clear force capture done for next captures */
iowrite32(0xfc, gth->base + REG_GTH_SCR2);
if (!count)
dev_dbg(&thdev->dev, "timeout waiting for GTH[%d] PLE\n",
output->port);
reg = ioread32(gth->base + REG_GTH_SCRPD0); reg = ioread32(gth->base + REG_GTH_SCRPD0);
reg &= ~output->scratchpad; reg &= ~output->scratchpad;
...@@ -526,8 +577,8 @@ static void intel_th_gth_enable(struct intel_th_device *thdev, ...@@ -526,8 +577,8 @@ static void intel_th_gth_enable(struct intel_th_device *thdev,
{ {
struct gth_device *gth = dev_get_drvdata(&thdev->dev); struct gth_device *gth = dev_get_drvdata(&thdev->dev);
struct intel_th *th = to_intel_th(thdev); struct intel_th *th = to_intel_th(thdev);
u32 scr = 0xfc0000, scrpd;
int master; int master;
u32 scrpd;
spin_lock(&gth->gth_lock); spin_lock(&gth->gth_lock);
for_each_set_bit(master, gth->output[output->port].master, for_each_set_bit(master, gth->output[output->port].master,
...@@ -535,9 +586,6 @@ static void intel_th_gth_enable(struct intel_th_device *thdev, ...@@ -535,9 +586,6 @@ static void intel_th_gth_enable(struct intel_th_device *thdev,
gth_master_set(gth, master, output->port); gth_master_set(gth, master, output->port);
} }
if (output->multiblock)
scr |= 0xff;
output->active = true; output->active = true;
spin_unlock(&gth->gth_lock); spin_unlock(&gth->gth_lock);
...@@ -548,8 +596,38 @@ static void intel_th_gth_enable(struct intel_th_device *thdev, ...@@ -548,8 +596,38 @@ static void intel_th_gth_enable(struct intel_th_device *thdev,
scrpd |= output->scratchpad; scrpd |= output->scratchpad;
iowrite32(scrpd, gth->base + REG_GTH_SCRPD0); iowrite32(scrpd, gth->base + REG_GTH_SCRPD0);
iowrite32(scr, gth->base + REG_GTH_SCR); intel_th_gth_start(gth, output);
iowrite32(0, gth->base + REG_GTH_SCR2); }
/**
* intel_th_gth_switch() - execute a switch sequence
* @thdev: GTH device
* @output: output device's descriptor
*
* This will execute a switch sequence that will trigger a switch window
* when tracing to MSC in multi-block mode.
*/
static void intel_th_gth_switch(struct intel_th_device *thdev,
struct intel_th_output *output)
{
struct gth_device *gth = dev_get_drvdata(&thdev->dev);
unsigned long count;
u32 reg;
/* trigger */
iowrite32(0, gth->base + REG_CTS_CTL);
iowrite32(CTS_CTL_SEQUENCER_ENABLE, gth->base + REG_CTS_CTL);
/* wait on trigger status */
for (reg = 0, count = CTS_TRIG_WAITLOOP_DEPTH;
count && !(reg & BIT(4)); count--) {
reg = ioread32(gth->base + REG_CTS_STAT);
cpu_relax();
}
if (!count)
dev_dbg(&thdev->dev, "timeout waiting for CTS Trigger\n");
intel_th_gth_stop(gth, output, false);
intel_th_gth_start(gth, output);
}
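For orientation, here is a minimal sketch of how the core could route a window-switch request from an output device to this new ->trig_switch() callback, matching the intel_th_trace_switch() declaration added to intel_th.h further below. This is an illustration under assumptions, not the collapsed core.c hunk: it presumes the existing to_intel_th()/to_intel_th_driver() helpers and the thdev->output member behave as elsewhere in the driver.

int intel_th_trace_switch(struct intel_th_device *thdev)
{
	/* Sketch only: dispatch a window switch request to the hub's op */
	struct intel_th *th = to_intel_th(thdev);
	struct intel_th_device *hub = th->hub;
	struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver);

	if (WARN_ON_ONCE(thdev->type != INTEL_TH_OUTPUT))
		return -EINVAL;

	if (!hubdrv->trig_switch)
		return -EOPNOTSUPP;

	hubdrv->trig_switch(hub, &thdev->output);

	return 0;
}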
/**
@@ -735,6 +813,7 @@ static struct intel_th_driver intel_th_gth_driver = {
.unassign = intel_th_gth_unassign,
.set_output = intel_th_gth_set_output,
.enable = intel_th_gth_enable,
.trig_switch = intel_th_gth_switch,
.disable = intel_th_gth_disable,
.driver = {
.name = "gth",
......
@@ -49,6 +49,12 @@ enum {
REG_GTH_SCRPD3 = 0xec, /* ScratchPad[3] */
REG_TSCU_TSUCTRL = 0x2000, /* TSCU control register */
REG_TSCU_TSCUSTAT = 0x2004, /* TSCU status register */
/* Common Capture Sequencer (CTS) registers */
REG_CTS_C0S0_EN = 0x30c0, /* clause_event_enable_c0s0 */
REG_CTS_C0S0_ACT = 0x3180, /* clause_action_control_c0s0 */
REG_CTS_STAT = 0x32a0, /* cts_status */
REG_CTS_CTL = 0x32a4, /* cts_control */
};
/* waiting for Pipeline Empty bit(s) to assert for GTH */
@@ -57,4 +63,17 @@ enum {
#define TSUCTRL_CTCRESYNC BIT(0)
#define TSCUSTAT_CTCSYNCING BIT(1)
/* waiting for Trigger status to assert for CTS */
#define CTS_TRIG_WAITLOOP_DEPTH 10000
#define CTS_EVENT_ENABLE_IF_ANYTHING BIT(31)
#define CTS_ACTION_CONTROL_STATE_OFF 27
#define CTS_ACTION_CONTROL_SET_STATE(x) \
(((x) & 0x1f) << CTS_ACTION_CONTROL_STATE_OFF)
#define CTS_ACTION_CONTROL_TRIGGER BIT(4)
#define CTS_STATE_IDLE 0x10u
#define CTS_CTL_SEQUENCER_ENABLE BIT(0)
#endif /* __INTEL_TH_GTH_H__ */
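To make the new CTS programming interface concrete, here is a hedged sketch of how the clause 0 / state 0 registers could be armed so that any event fires the trigger that intel_th_gth_switch() polls for in REG_CTS_STAT. The helper name is made up for illustration; the actual sequencer setup lives in the collapsed gth.c portion of this diff.

/* Illustrative only: arm CTS clause 0 / state 0 to trigger on any event */
static void intel_th_gth_arm_cts(struct gth_device *gth)
{
	iowrite32(CTS_EVENT_ENABLE_IF_ANYTHING, gth->base + REG_CTS_C0S0_EN);
	iowrite32(CTS_ACTION_CONTROL_SET_STATE(CTS_STATE_IDLE) |
		  CTS_ACTION_CONTROL_TRIGGER,
		  gth->base + REG_CTS_C0S0_ACT);
}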
@@ -8,6 +8,8 @@
#ifndef __INTEL_TH_H__
#define __INTEL_TH_H__
#include <linux/irqreturn.h>
/* intel_th_device device types */
enum {
/* Devices that generate trace data */
@@ -18,6 +20,8 @@ enum {
INTEL_TH_SWITCH,
};
struct intel_th_device;
/**
* struct intel_th_output - descriptor INTEL_TH_OUTPUT type devices
* @port: output port number, assigned by the switch
@@ -25,6 +29,7 @@ enum {
* @scratchpad: scratchpad bits to flag when this output is enabled
* @multiblock: true for multiblock output configuration
* @active: true when this output is enabled
* @wait_empty: wait for device pipeline to be empty
*
* Output port descriptor, used by switch driver to tell which output
* port this output device corresponds to. Filled in at output device's
@@ -42,10 +47,12 @@ struct intel_th_output {
/**
* struct intel_th_drvdata - describes hardware capabilities and quirks
* @tscu_enable: device needs SW to enable time stamping unit
* @has_mintctl: device has interrupt control (MINTCTL) register
* @host_mode_only: device can only operate in 'host debugger' mode
*/
struct intel_th_drvdata {
unsigned int tscu_enable : 1,
has_mintctl : 1,
host_mode_only : 1;
};
@@ -157,10 +164,13 @@ struct intel_th_driver {
struct intel_th_device *othdev);
void (*enable)(struct intel_th_device *thdev,
struct intel_th_output *output);
void (*trig_switch)(struct intel_th_device *thdev,
struct intel_th_output *output);
void (*disable)(struct intel_th_device *thdev,
struct intel_th_output *output);
/* output ops */
void (*irq)(struct intel_th_device *thdev);
irqreturn_t (*irq)(struct intel_th_device *thdev);
void (*wait_empty)(struct intel_th_device *thdev);
int (*activate)(struct intel_th_device *thdev);
void (*deactivate)(struct intel_th_device *thdev);
/* file_operations for those who want a device node */
@@ -213,21 +223,23 @@ static inline struct intel_th *to_intel_th(struct intel_th_device *thdev)
struct intel_th *
intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
struct resource *devres, unsigned int ndevres, int irq);
struct resource *devres, unsigned int ndevres);
void intel_th_free(struct intel_th *th);
int intel_th_driver_register(struct intel_th_driver *thdrv);
void intel_th_driver_unregister(struct intel_th_driver *thdrv);
int intel_th_trace_enable(struct intel_th_device *thdev);
int intel_th_trace_switch(struct intel_th_device *thdev);
int intel_th_trace_disable(struct intel_th_device *thdev);
int intel_th_set_output(struct intel_th_device *thdev,
unsigned int master);
int intel_th_output_enable(struct intel_th *th, unsigned int otype);
enum {
enum th_mmio_idx {
TH_MMIO_CONFIG = 0,
TH_MMIO_SW = 2,
TH_MMIO_SW = 1,
TH_MMIO_RTIT = 2,
TH_MMIO_END,
};
@@ -237,6 +249,9 @@ enum {
#define TH_CONFIGURABLE_MASTERS 256
#define TH_MSC_MAX 2
/* Maximum IRQ vectors */
#define TH_NVEC_MAX 8
/**
* struct intel_th - Intel TH controller
* @dev: driver core's device
@@ -244,7 +259,7 @@ enum {
* @hub: "switch" subdevice (GTH)
* @resource: resources of the entire controller
* @num_thdevs: number of devices in the @thdev array
* @num_resources: number or resources in the @resource array
* @num_resources: number of resources in the @resource array
* @irq: irq number
* @id: this Intel TH controller's device ID in the system
* @major: device node major for output devices
@@ -256,7 +271,7 @@ struct intel_th {
struct intel_th_device *hub;
struct intel_th_drvdata *drvdata;
struct resource *resource;
struct resource resource[TH_MMIO_END];
int (*activate)(struct intel_th *);
void (*deactivate)(struct intel_th *);
unsigned int num_thdevs;
@@ -296,6 +311,9 @@ enum {
REG_TSCU_OFFSET = 0x2000,
REG_TSCU_LENGTH = 0x1000,
REG_CTS_OFFSET = 0x3000,
REG_CTS_LENGTH = 0x1000,
/* Software Trace Hub (STH) [0x4000..0x4fff] */
REG_STH_OFFSET = 0x4000,
REG_STH_LENGTH = 0x2000,
......
@@ -11,6 +11,7 @@
enum {
REG_MSU_MSUPARAMS = 0x0000,
REG_MSU_MSUSTS = 0x0008,
REG_MSU_MINTCTL = 0x0004, /* MSU-global interrupt control */
REG_MSU_MSC0CTL = 0x0100, /* MSC0 control */
REG_MSU_MSC0STS = 0x0104, /* MSC0 status */
REG_MSU_MSC0BAR = 0x0108, /* MSC0 output base address */
@@ -28,6 +29,8 @@ enum {
/* MSUSTS bits */
#define MSUSTS_MSU_INT BIT(0)
#define MSUSTS_MSC0BLAST BIT(16)
#define MSUSTS_MSC1BLAST BIT(24)
/* MSCnCTL bits */
#define MSC_EN BIT(0)
@@ -36,6 +39,11 @@ enum {
#define MSC_MODE (BIT(4) | BIT(5))
#define MSC_LEN (BIT(8) | BIT(9) | BIT(10))
/* MINTCTL bits */
#define MICDE BIT(0)
#define M0BLIE BIT(16)
#define M1BLIE BIT(24)
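As a hedged illustration of how the new MINTCTL and MSUSTS bits fit together (the helper below is not part of this diff; the real logic is in the collapsed msu.c hunk), a device whose drvdata sets has_mintctl could unmask the per-MSC "block last" interrupt like this and then check the matching MSUSTS_MSC0BLAST/MSUSTS_MSC1BLAST bit in its interrupt handler:

/* Illustrative only: unmask or mask the "last block" interrupt for one MSC */
static void msu_block_intr_set(void __iomem *msu_base, int msc_index, bool on)
{
	u32 bit = msc_index ? M1BLIE : M0BLIE;
	u32 mintctl = ioread32(msu_base + REG_MSU_MINTCTL);

	if (on)
		mintctl |= bit;
	else
		mintctl &= ~bit;

	iowrite32(mintctl, msu_base + REG_MSU_MINTCTL);
}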
/* MSC operating modes (MSC_MODE) */
enum {
MSC_MODE_SINGLE = 0,
@@ -87,7 +95,7 @@ static inline unsigned long msc_data_sz(struct msc_block_desc *bdesc)
static inline bool msc_block_wrapped(struct msc_block_desc *bdesc)
{
if (bdesc->hw_tag & MSC_HW_TAG_BLOCKWRAP)
if (bdesc->hw_tag & (MSC_HW_TAG_BLOCKWRAP | MSC_HW_TAG_WINWRAP))
return true;
return false;
......
@@ -17,7 +17,13 @@
#define DRIVER_NAME "intel_th_pci"
#define BAR_MASK (BIT(TH_MMIO_CONFIG) | BIT(TH_MMIO_SW))
enum {
TH_PCI_CONFIG_BAR = 0,
TH_PCI_STH_SW_BAR = 2,
TH_PCI_RTIT_BAR = 4,
};
#define BAR_MASK (BIT(TH_PCI_CONFIG_BAR) | BIT(TH_PCI_STH_SW_BAR))
#define PCI_REG_NPKDSC 0x80
#define NPKDSC_TSACT BIT(5)
@@ -66,8 +72,12 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
struct intel_th_drvdata *drvdata = (void *)id->driver_data;
struct resource resource[TH_MMIO_END + TH_NVEC_MAX] = {
[TH_MMIO_CONFIG] = pdev->resource[TH_PCI_CONFIG_BAR],
[TH_MMIO_SW] = pdev->resource[TH_PCI_STH_SW_BAR],
};
int err, r = TH_MMIO_SW + 1, i;
struct intel_th *th;
int err;
err = pcim_enable_device(pdev);
if (err)
@@ -77,8 +87,19 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
if (err)
return err;
th = intel_th_alloc(&pdev->dev, drvdata, pdev->resource,
DEVICE_COUNT_RESOURCE, pdev->irq);
if (pdev->resource[TH_PCI_RTIT_BAR].start) {
resource[TH_MMIO_RTIT] = pdev->resource[TH_PCI_RTIT_BAR];
r++;
}
err = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_ALL_TYPES);
if (err > 0)
for (i = 0; i < err; i++, r++) {
resource[r].flags = IORESOURCE_IRQ;
resource[r].start = pci_irq_vector(pdev, i);
}
th = intel_th_alloc(&pdev->dev, drvdata, resource, r);
if (IS_ERR(th))
return PTR_ERR(th);
@@ -95,10 +116,13 @@ static void intel_th_pci_remove(struct pci_dev *pdev)
struct intel_th *th = pci_get_drvdata(pdev);
intel_th_free(th);
pci_free_irq_vectors(pdev);
}
static const struct intel_th_drvdata intel_th_2x = {
.tscu_enable = 1,
.has_mintctl = 1,
};
static const struct pci_device_id intel_th_pci_id_table[] = {
......
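The intel_th_2x capability set above is tied to individual controllers through the PCI ID table, whose entries are truncated here. A hedged example of what one entry looks like, using an illustrative device ID rather than one taken from this diff:

/* Illustrative table; the device ID is an example, not from this diff */
static const struct pci_device_id example_th_ids[] = {
	{
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x9da6),
		.driver_data = (kernel_ulong_t)&intel_th_2x,
	},
	{ 0 },
};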
@@ -90,18 +90,7 @@ static int icc_summary_show(struct seq_file *s, void *data)
return 0;
}
DEFINE_SHOW_ATTRIBUTE(icc_summary);
static int icc_summary_open(struct inode *inode, struct file *file)
{
return single_open(file, icc_summary_show, inode->i_private);
}
static const struct file_operations icc_summary_fops = {
.open = icc_summary_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
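DEFINE_SHOW_ATTRIBUTE(icc_summary) generates both the icc_summary_open() wrapper around single_open() and the icc_summary_fops structure, which is why the open-coded versions above can be deleted. A short sketch of the registration side follows; the directory and file names are illustrative, not lifted from the interconnect core.

/* Sketch: register the generated fops with debugfs (names are illustrative) */
static struct dentry *icc_debugfs_dir;

static void icc_debugfs_init(void)
{
	icc_debugfs_dir = debugfs_create_dir("interconnect", NULL);
	debugfs_create_file("interconnect_summary", 0444, icc_debugfs_dir,
			    NULL, &icc_summary_fops);
}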
static struct icc_node *node_find(const int id)
{
......
@@ -496,6 +496,14 @@ config VEXPRESS_SYSCFG
bus. System Configuration interface is one of the possible means
of generating transactions on this bus.
config ASPEED_P2A_CTRL
depends on (ARCH_ASPEED || COMPILE_TEST) && REGMAP && MFD_SYSCON
tristate "Aspeed ast2400/2500 HOST P2A VGA MMIO to BMC bridge control"
help
Control Aspeed ast2400/2500 HOST P2A VGA MMIO to BMC mappings through
ioctl()s, the driver also provides an interface for userspace mappings to
a pre-defined region.
config ASPEED_LPC_CTRL
depends on (ARCH_ASPEED || COMPILE_TEST) && REGMAP && MFD_SYSCON
tristate "Aspeed ast2400/2500 HOST LPC to BMC bridge control"
......
@@ -56,6 +56,7 @@ obj-$(CONFIG_VEXPRESS_SYSCFG) += vexpress-syscfg.o
obj-$(CONFIG_CXL_BASE) += cxl/
obj-$(CONFIG_ASPEED_LPC_CTRL) += aspeed-lpc-ctrl.o
obj-$(CONFIG_ASPEED_LPC_SNOOP) += aspeed-lpc-snoop.o
obj-$(CONFIG_ASPEED_P2A_CTRL) += aspeed-p2a-ctrl.o
obj-$(CONFIG_PCI_ENDPOINT_TEST) += pci_endpoint_test.o
obj-$(CONFIG_OCXL) += ocxl/
obj-y += cardreader/
......
@@ -456,13 +456,13 @@ static void rts5260_pwr_saving_setting(struct rtsx_pcr *pcr)
pcr_dbg(pcr, "Set parameters for L1.2.");
rtsx_pci_write_register(pcr, PWR_GLOBAL_CTRL,
0xFF, PCIE_L1_2_EN);
rtsx_pci_write_register(pcr, RTS5260_DVCC_CTRL,
RTS5260_DVCC_OCP_EN |
RTS5260_DVCC_OCP_CL_EN,
RTS5260_DVCC_OCP_EN |
RTS5260_DVCC_OCP_CL_EN);
rtsx_pci_write_register(pcr, PWR_FE_CTL,
0xFF, PCIE_L1_2_PD_FE_EN);
} else if (lss_l1_1) {
pcr_dbg(pcr, "Set parameters for L1.1.");
......
@@ -227,7 +227,7 @@ static int ddcb_info_show(struct seq_file *s, void *unused)
seq_puts(s, "DDCB QUEUE:\n");
seq_printf(s, " ddcb_max: %d\n"
" ddcb_daddr: %016llx - %016llx\n"
" ddcb_vaddr: %016llx\n"
" ddcb_vaddr: %p\n"
" ddcbs_in_flight: %u\n"
" ddcbs_max_in_flight: %u\n"
" ddcbs_completed: %u\n"
@@ -237,7 +237,7 @@ static int ddcb_info_show(struct seq_file *s, void *unused)
queue->ddcb_max, (long long)queue->ddcb_daddr,
(long long)queue->ddcb_daddr +
(queue->ddcb_max * DDCB_LENGTH),
(long long)queue->ddcb_vaddr, queue->ddcbs_in_flight,
queue->ddcb_vaddr, queue->ddcbs_in_flight,
queue->ddcbs_max_in_flight, queue->ddcbs_completed,
queue->return_on_busy, queue->wait_on_busy,
cd->irqs_processed);
......
@@ -6,7 +6,7 @@ obj-m := habanalabs.o
habanalabs-y := habanalabs_drv.o device.o context.o asid.o habanalabs_ioctl.o \
command_buffer.o hw_queue.o irq.o sysfs.o hwmon.o memory.o \
command_submission.o mmu.o
command_submission.o mmu.o firmware_if.o pci.o
habanalabs-$(CONFIG_DEBUG_FS) += debugfs.o
......
@@ -13,7 +13,7 @@
static void cb_fini(struct hl_device *hdev, struct hl_cb *cb)
{
hdev->asic_funcs->dma_free_coherent(hdev, cb->size,
hdev->asic_funcs->asic_dma_free_coherent(hdev, cb->size,
(void *) (uintptr_t) cb->kernel_address,
cb->bus_address);
kfree(cb);
@@ -66,10 +66,10 @@ static struct hl_cb *hl_cb_alloc(struct hl_device *hdev, u32 cb_size,
return NULL;
if (ctx_id == HL_KERNEL_ASID_ID)
p = hdev->asic_funcs->dma_alloc_coherent(hdev, cb_size,
p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, cb_size,
&cb->bus_address, GFP_ATOMIC);
else
p = hdev->asic_funcs->dma_alloc_coherent(hdev, cb_size,
p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, cb_size,
&cb->bus_address,
GFP_USER | __GFP_ZERO);
if (!p) {
@@ -214,6 +214,13 @@ int hl_cb_ioctl(struct hl_fpriv *hpriv, void *data)
u64 handle;
int rc;
if (hl_device_disabled_or_in_reset(hdev)) {
dev_warn_ratelimited(hdev->dev,
"Device is %s. Can't execute CB IOCTL\n",
atomic_read(&hdev->in_reset) ? "in_reset" : "disabled");
return -EBUSY;
}
switch (args->in.op) {
case HL_CB_OP_CREATE:
rc = hl_cb_create(hdev, &hpriv->cb_mgr, args->in.cb_size,
......
@@ -93,7 +93,6 @@ static int cs_parser(struct hl_fpriv *hpriv, struct hl_cs_job *job)
parser.user_cb_size = job->user_cb_size;
parser.ext_queue = job->ext_queue;
job->patched_cb = NULL;
parser.use_virt_addr = hdev->mmu_enable;
rc = hdev->asic_funcs->cs_parser(hdev, &parser);
if (job->ext_queue) {
@@ -261,7 +260,8 @@ static void cs_timedout(struct work_struct *work)
ctx_asid = cs->ctx->asid;
/* TODO: add information about last signaled seq and last emitted seq */
dev_err(hdev->dev, "CS %d.%llu got stuck!\n", ctx_asid, cs->sequence);
dev_err(hdev->dev, "User %d command submission %llu got stuck!\n",
ctx_asid, cs->sequence);
cs_put(cs);
@@ -600,20 +600,20 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
void __user *chunks;
u32 num_chunks;
u64 cs_seq = ULONG_MAX;
int rc, do_restore;
int rc, do_ctx_switch;
bool need_soft_reset = false;
if (hl_device_disabled_or_in_reset(hdev)) {
dev_warn(hdev->dev,
dev_warn_ratelimited(hdev->dev,
"Device is %s. Can't submit new CS\n",
atomic_read(&hdev->in_reset) ? "in_reset" : "disabled");
rc = -EBUSY;
goto out;
}
do_restore = atomic_cmpxchg(&ctx->thread_restore_token, 1, 0);
do_ctx_switch = atomic_cmpxchg(&ctx->thread_ctx_switch_token, 1, 0);
if (do_restore || (args->in.cs_flags & HL_CS_FLAGS_FORCE_RESTORE)) {
if (do_ctx_switch || (args->in.cs_flags & HL_CS_FLAGS_FORCE_RESTORE)) {
long ret;
chunks = (void __user *)(uintptr_t)args->in.chunks_restore;
@@ -621,7 +621,7 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
mutex_lock(&hpriv->restore_phase_mutex);
if (do_restore) {
if (do_ctx_switch) {
rc = hdev->asic_funcs->context_switch(hdev, ctx->asid);
if (rc) {
dev_err_ratelimited(hdev->dev,
@@ -677,18 +677,18 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
}
}
ctx->thread_restore_wait_token = 1;
ctx->thread_ctx_switch_wait_token = 1;
} else if (!ctx->thread_restore_wait_token) {
} else if (!ctx->thread_ctx_switch_wait_token) {
u32 tmp;
rc = hl_poll_timeout_memory(hdev,
(u64) (uintptr_t) &ctx->thread_restore_wait_token,
(u64) (uintptr_t) &ctx->thread_ctx_switch_wait_token,
jiffies_to_usecs(hdev->timeout_jiffies),
&tmp);
if (rc || !tmp) {
dev_err(hdev->dev,
"restore phase hasn't finished in time\n");
"context switch phase didn't finish in time\n");
rc = -ETIMEDOUT;
goto out;
}
......
@@ -106,8 +106,8 @@ int hl_ctx_init(struct hl_device *hdev, struct hl_ctx *ctx, bool is_kernel_ctx)
ctx->cs_sequence = 1;
spin_lock_init(&ctx->cs_lock);
atomic_set(&ctx->thread_restore_token, 1);
atomic_set(&ctx->thread_ctx_switch_token, 1);
ctx->thread_restore_wait_token = 0;
ctx->thread_ctx_switch_wait_token = 0;
if (is_kernel_ctx) {
ctx->asid = HL_KERNEL_ASID_ID; /* KMD gets ASID 0 */
......
@@ -505,22 +505,97 @@ static ssize_t mmu_write(struct file *file, const char __user *buf,
return -EINVAL;
}
static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
u64 *phys_addr)
{
struct hl_ctx *ctx = hdev->user_ctx;
u64 hop_addr, hop_pte_addr, hop_pte;
int rc = 0;
if (!ctx) {
dev_err(hdev->dev, "no ctx available\n");
return -EINVAL;
}
mutex_lock(&ctx->mmu_lock);
/* hop 0 */
hop_addr = get_hop0_addr(ctx);
hop_pte_addr = get_hop0_pte_addr(ctx, hop_addr, virt_addr);
hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
/* hop 1 */
hop_addr = get_next_hop_addr(hop_pte);
if (hop_addr == ULLONG_MAX)
goto not_mapped;
hop_pte_addr = get_hop1_pte_addr(ctx, hop_addr, virt_addr);
hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
/* hop 2 */
hop_addr = get_next_hop_addr(hop_pte);
if (hop_addr == ULLONG_MAX)
goto not_mapped;
hop_pte_addr = get_hop2_pte_addr(ctx, hop_addr, virt_addr);
hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
/* hop 3 */
hop_addr = get_next_hop_addr(hop_pte);
if (hop_addr == ULLONG_MAX)
goto not_mapped;
hop_pte_addr = get_hop3_pte_addr(ctx, hop_addr, virt_addr);
hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
if (!(hop_pte & LAST_MASK)) {
/* hop 4 */
hop_addr = get_next_hop_addr(hop_pte);
if (hop_addr == ULLONG_MAX)
goto not_mapped;
hop_pte_addr = get_hop4_pte_addr(ctx, hop_addr, virt_addr);
hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
}
if (!(hop_pte & PAGE_PRESENT_MASK))
goto not_mapped;
*phys_addr = (hop_pte & PTE_PHYS_ADDR_MASK) | (virt_addr & OFFSET_MASK);
goto out;
not_mapped:
dev_err(hdev->dev, "virt addr 0x%llx is not mapped to phys addr\n",
virt_addr);
rc = -EINVAL;
out:
mutex_unlock(&ctx->mmu_lock);
return rc;
}
static ssize_t hl_data_read32(struct file *f, char __user *buf,
size_t count, loff_t *ppos)
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
struct asic_fixed_properties *prop = &hdev->asic_prop;
char tmp_buf[32];
u64 addr = entry->addr;
u32 val;
ssize_t rc;
if (*ppos)
return 0;
rc = hdev->asic_funcs->debugfs_read32(hdev, entry->addr, &val);
if (addr >= prop->va_space_dram_start_address &&
addr < prop->va_space_dram_end_address &&
hdev->mmu_enable &&
hdev->dram_supports_virtual_memory) {
rc = device_va_to_pa(hdev, entry->addr, &addr);
if (rc)
return rc;
}
rc = hdev->asic_funcs->debugfs_read32(hdev, addr, &val);
if (rc) {
dev_err(hdev->dev, "Failed to read from 0x%010llx\n",
entry->addr);
dev_err(hdev->dev, "Failed to read from 0x%010llx\n", addr);
return rc;
}
@@ -536,6 +611,8 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
struct asic_fixed_properties *prop = &hdev->asic_prop;
u64 addr = entry->addr;
u32 value;
ssize_t rc;
@@ -543,10 +620,19 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
if (rc)
return rc;
rc = hdev->asic_funcs->debugfs_write32(hdev, entry->addr, value);
if (addr >= prop->va_space_dram_start_address &&
addr < prop->va_space_dram_end_address &&
hdev->mmu_enable &&
hdev->dram_supports_virtual_memory) {
rc = device_va_to_pa(hdev, entry->addr, &addr);
if (rc)
return rc;
}
rc = hdev->asic_funcs->debugfs_write32(hdev, addr, value);
if (rc) {
dev_err(hdev->dev, "Failed to write 0x%08x to 0x%010llx\n",
value, entry->addr);
value, addr);
return rc;
}
......
subdir-ccflags-y += -I$(src)
HL_GOYA_FILES := goya/goya.o goya/goya_security.o goya/goya_hwmgr.o
HL_GOYA_FILES := goya/goya.o goya/goya_security.o goya/goya_hwmgr.o \
goya/goya_coresight.o
@@ -188,4 +188,3 @@
#define CPU_CA53_CFG_ARM_PMU_EVENT_MASK 0x3FFFFFFF
#endif /* ASIC_REG_CPU_CA53_CFG_MASKS_H_ */