Commit f680df51 authored by Dave Airlie

Merge tag 'drm-misc-next-2024-05-30' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for 6.11:

UAPI Changes:
  - Deprecate DRM date and return a 0 date in DRM_IOCTL_VERSION

Core Changes:
  - connector: Create a set of helpers to help with HDMI support
  - fbdev: Create memory manager optimized fbdev emulation
  - panic: Allow to select fonts, improve drm_fb_dma_get_scanout_buffer

Driver Changes:
  - Remove driver owner assignments
  - Allow more drivers to compile with COMPILE_TEST
  - Conversions to drm_edid
  - ivpu: hardware scheduler support, profiling support, improvements
    to the platform support layer
  - mgag200: general reworks and improvements
  - nouveau: Add NVreg_RegistryDwords command line option
  - rockchip: Conversion to the hdmi helpers
  - sun4i: Conversion to the hdmi helpers
  - vc4: Conversion to the hdmi helpers
  - v3d: Perf counters improvements
  - zynqmp: IRQ and debugfs improvements
  - bridge:
    - Remove redundant checks on bridge->encoder
  - panels:
    - Switch panels from register table initialization to proper code
    - Now that the panel code tracks the panel state, remove every
      ad-hoc implementation in the panel drivers
    - New panels: Lincoln Tech Sol LCD185-101CT, Microtips Technology
      13-101HIEBCAF0-C, Microtips Technology MF-103HIEB0GA0, BOE
      nv110wum-l60, IVO t109nw41
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <mripard@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240530-hilarious-flat-magpie-5fa186@houat
parents 1ddaaa24 0c02cebc
......@@ -32,8 +32,6 @@ properties:
- innolux,hj110iz-01a
# STARRY 2081101QFH032011-53G 10.1" WUXGA TFT LCD panel
- starry,2081101qfh032011-53g
# STARRY himax83102-j02 10.51" WUXGA TFT LCD panel
- starry,himax83102-j02
# STARRY ili9882t 10.51" WUXGA TFT LCD panel
- starry,ili9882t
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/himax,hx83102.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Himax HX83102 MIPI-DSI LCD panel controller
maintainers:
- Cong Yang <yangcong5@huaqin.corp-partner.google.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
items:
- enum:
# Boe nv110wum-l60 11.0" WUXGA TFT LCD panel
- boe,nv110wum-l60
# IVO t109nw41 11.0" WUXGA TFT LCD panel
- ivo,t109nw41
# STARRY himax83102-j02 10.51" WUXGA TFT LCD panel
- starry,himax83102-j02
- const: himax,hx83102
reg:
description: the virtual channel number of a DSI peripheral
enable-gpios:
description: a GPIO spec for the enable pin
pp1800-supply:
description: core voltage supply
avdd-supply:
description: phandle of the regulator that provides positive voltage
avee-supply:
description: phandle of the regulator that provides negative voltage
backlight: true
port: true
rotation: true
required:
- compatible
- reg
- enable-gpios
- pp1800-supply
- avdd-supply
- avee-supply
additionalProperties: false
examples:
- |
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "starry,himax83102-j02", "himax,hx83102";
reg = <0>;
enable-gpios = <&pio 45 0>;
avdd-supply = <&ppvarn_lcd>;
avee-supply = <&ppvarp_lcd>;
pp1800-supply = <&pp1800_lcd>;
backlight = <&backlight_lcd0>;
port {
panel_in: endpoint {
remote-endpoint = <&dsi_out>;
};
};
};
};
...
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/panel-edp-legacy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Legacy eDP panels from before the "edp-panel" compatible
maintainers:
- Douglas Anderson <dianders@chromium.org>
description: |
This binding file is a collection of eDP panels from before the generic
"edp-panel" compatible was introduced. It is kept around to support old
dts files. The only reason one might add a new panel here instead of using
the generic "edp-panel" is if it needed to be used on an eDP controller
that doesn't support the generic "edp-panel" compatible, but it should be
a strong preference to add the generic "edp-panel" compatible instead.
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
enum:
# compatible must be listed in alphabetical order, ordered by compatible.
# The description in the comment is mandatory for each compatible.
# AU Optronics Corporation 10.1" WSVGA TFT LCD panel
- auo,b101ean01
# AUO B116XAK01 eDP TFT LCD panel
- auo,b116xa01
# AU Optronics Corporation 13.3" FHD (1920x1080) color TFT-LCD panel
- auo,b133han05
# AU Optronics Corporation 13.3" FHD (1920x1080) color TFT-LCD panel
- auo,b133htn01
# AU Optronics Corporation 13.3" WXGA (1366x768) TFT LCD panel
- auo,b133xtn01
# AU Optronics Corporation 14.0" FHD (1920x1080) color TFT-LCD panel
- auo,b140han06
# BOE OPTOELECTRONICS TECHNOLOGY 10.1" WXGA TFT LCD panel
- boe,nv101wxmn51
# BOE NV133FHM-N61 13.3" FHD (1920x1080) TFT LCD Panel
- boe,nv110wtm-n61
# BOE NV110WTM-N61 11.0" 2160x1440 TFT LCD Panel
- boe,nv133fhm-n61
# BOE NV133FHM-N62 13.3" FHD (1920x1080) TFT LCD Panel
- boe,nv133fhm-n62
# BOE NV140FHM-N49 14.0" FHD a-Si FT panel
- boe,nv140fhmn49
# Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
- innolux,n116bca-ea1
# Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
- innolux,n116bge
# InnoLux 13.3" FHD (1920x1080) eDP TFT LCD panel
- innolux,n125hce-gn1
# Innolux P120ZDG-BF1 12.02 inch eDP 2K display panel
- innolux,p120zdg-bf1
# InfoVision Optoelectronics M133NWF4 R0 13.3" FHD (1920x1080) TFT LCD panel
- ivo,m133nwf4-r0
# King & Display KD116N21-30NV-A010 eDP TFT LCD panel
- kingdisplay,kd116n21-30nv-a010
# LG LP079QX1-SP0V 7.9" (1536x2048 pixels) TFT LCD panel
- lg,lp079qx1-sp0v
# LG 9.7" (2048x1536 pixels) TFT LCD panel
- lg,lp097qx1-spa1
# LG 12.0" (1920x1280 pixels) TFT LCD panel
- lg,lp120up1
# LG 12.9" (2560x1700 pixels) TFT LCD panel
- lg,lp129qe
# NewEast Optoelectronics CO., LTD WJFH116008A eDP TFT LCD panel
- neweast,wjfh116008a
# Samsung 12.2" (2560x1600 pixels) TFT LCD panel
- samsung,lsn122dl01-c01
# Samsung Electronics 14" WXGA (1366x768) TFT LCD panel
- samsung,ltn140at29-301
# Sharp LD-D5116Z01B 12.3" WUXGA+ eDP panel
- sharp,ld-d5116z01b
# Sharp 12.3" (2400x1600 pixels) TFT LCD panel
- sharp,lq123p1jx31
# Sharp 14" (1920x1080 pixels) TFT LCD panel
- sharp,lq140m1jw46
# Starry 12.2" (1920x1200 pixels) TFT LCD panel
- starry,kr122ea0sra
backlight: true
ddc-i2c-bus: true
enable-gpios: true
panel-timing: true
port: true
power-supply: true
no-hpd: true
hpd-gpios: true
additionalProperties: false
required:
- compatible
- power-supply
examples:
- |
panel: panel {
compatible = "innolux,n116bge";
power-supply = <&panel_regulator>;
backlight = <&backlight>;
panel-timing {
clock-frequency = <74250000>;
hactive = <1366>;
hfront-porch = <136>;
hback-porch = <60>;
hsync-len = <30>;
hsync-active = <0>;
vactive = <768>;
vfront-porch = <8>;
vback-porch = <12>;
vsync-len = <12>;
vsync-active = <0>;
};
port {
panel_in_edp: endpoint {
remote-endpoint = <&edp_out_panel>;
};
};
};
......@@ -41,6 +41,12 @@ properties:
- auo,g190ean01
# Kaohsiung Opto-Electronics Inc. 10.1" WUXGA (1920 x 1200) LVDS TFT LCD panel
- koe,tx26d202vm0bwa
# Lincoln Technology Solutions, LCD185-101CT 10.1" TFT 1920x1200
- lincolntech,lcd185-101ct
# Microtips Technology MF-101HIEBCAF0 10.1" WUXGA (1920x1200) TFT LCD panel
- microtips,mf-101hiebcaf0
# Microtips Technology MF-103HIEB0GA0 10.25" 1920x720 TFT LCD panel
- microtips,mf-103hieb0ga0
# NLT Technologies, Ltd. 15.6" FHD (1920x1080) LVDS TFT LCD panel
- nlt,nl192108ac18-02d
......
......@@ -41,22 +41,10 @@ properties:
- ampire,am800600p5tmqw-tb8h
# AU Optronics Corporation 10.1" WSVGA TFT LCD panel
- auo,b101aw03
# AU Optronics Corporation 10.1" WSVGA TFT LCD panel
- auo,b101ean01
# AU Optronics Corporation 10.1" WXGA TFT LCD panel
- auo,b101xtn01
# AUO B116XAK01 eDP TFT LCD panel
- auo,b116xa01
# AU Optronics Corporation 11.6" HD (1366x768) color TFT-LCD panel
- auo,b116xw03
# AU Optronics Corporation 13.3" FHD (1920x1080) color TFT-LCD panel
- auo,b133han05
# AU Optronics Corporation 13.3" FHD (1920x1080) color TFT-LCD panel
- auo,b133htn01
# AU Optronics Corporation 13.3" WXGA (1366x768) TFT LCD panel
- auo,b133xtn01
# AU Optronics Corporation 14.0" FHD (1920x1080) color TFT-LCD panel
- auo,b140han06
# AU Optronics Corporation 7.0" FHD (800 x 480) TFT LCD panel
- auo,g070vvn01
# AU Optronics Corporation 10.1" (1280x800) color TFT LCD panel
......@@ -81,16 +69,6 @@ properties:
- boe,ev121wxm-n10-1850
# BOE HV070WSA-100 7.01" WSVGA TFT LCD panel
- boe,hv070wsa-100
# BOE OPTOELECTRONICS TECHNOLOGY 10.1" WXGA TFT LCD panel
- boe,nv101wxmn51
# BOE NV133FHM-N61 13.3" FHD (1920x1080) TFT LCD Panel
- boe,nv110wtm-n61
# BOE NV110WTM-N61 11.0" 2160x1440 TFT LCD Panel
- boe,nv133fhm-n61
# BOE NV133FHM-N62 13.3" FHD (1920x1080) TFT LCD Panel
- boe,nv133fhm-n62
# BOE NV140FHM-N49 14.0" FHD a-Si FT panel
- boe,nv140fhmn49
# Crystal Clear Technology CMT430B19N00 4.3" 480x272 TFT-LCD panel
- cct,cmt430b19n00
# CDTech(H.K.) Electronics Limited 4.3" 480x272 color TFT-LCD panel
......@@ -172,8 +150,6 @@ properties:
- hannstar,hsd100pxn1
# Hitachi Ltd. Corporation 9" WVGA (800x480) TFT LCD panel
- hit,tx23d38vm0caa
# InfoVision Optoelectronics M133NWF4 R0 13.3" FHD (1920x1080) TFT LCD panel
- ivo,m133nwf4-r0
# Innolux AT043TN24 4.3" WQVGA TFT LCD panel
- innolux,at043tn24
# Innolux AT070TN92 7.0" WQVGA TFT LCD panel
......@@ -192,22 +168,12 @@ properties:
- innolux,g121x1-l03
# Innolux Corporation 12.1" G121XCE-L01 XGA (1024x768) TFT LCD panel
- innolux,g121xce-l01
# Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
- innolux,n116bca-ea1
# Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
- innolux,n116bge
# InnoLux 13.3" FHD (1920x1080) eDP TFT LCD panel
- innolux,n125hce-gn1
# InnoLux 15.6" FHD (1920x1080) TFT LCD panel
- innolux,g156hce-l01
# InnoLux 15.6" WXGA TFT LCD panel
- innolux,n156bge-l21
# Innolux P120ZDG-BF1 12.02 inch eDP 2K display panel
- innolux,p120zdg-bf1
# Innolux Corporation 7.0" WSVGA (1024x600) TFT LCD panel
- innolux,zj070na-01p
# King & Display KD116N21-30NV-A010 eDP TFT LCD panel
- kingdisplay,kd116n21-30nv-a010
# Kaohsiung Opto-Electronics Inc. 5.7" QVGA (320 x 240) TFT LCD panel
- koe,tx14d24vm1bpa
# Kaohsiung Opto-Electronics. TX31D200VM0BAA 12.3" HSXGA LVDS panel
......@@ -220,14 +186,6 @@ properties:
- lemaker,bl035-rgb-002
# LG 7" (800x480 pixels) TFT LCD panel
- lg,lb070wv8
# LG LP079QX1-SP0V 7.9" (1536x2048 pixels) TFT LCD panel
- lg,lp079qx1-sp0v
# LG 9.7" (2048x1536 pixels) TFT LCD panel
- lg,lp097qx1-spa1
# LG 12.0" (1920x1280 pixels) TFT LCD panel
- lg,lp120up1
# LG 12.9" (2560x1700 pixels) TFT LCD panel
- lg,lp129qe
# Logic Technologies LT161010-2NHC 7" WVGA TFT Cap Touch Module
- logictechno,lt161010-2nhc
# Logic Technologies LT161010-2NHR 7" WVGA TFT Resistive Touch Module
......@@ -254,8 +212,6 @@ properties:
- nec,nl4827hc19-05b
# Netron-DY E231732 7.0" WSVGA TFT LCD panel
- netron-dy,e231732
# NewEast Optoelectronics CO., LTD WJFH116008A eDP TFT LCD panel
- neweast,wjfh116008a
# Newhaven Display International 480 x 272 TFT LCD panel
- newhaven,nhd-4.3-480272ef-atxl
# New Vision Display 7.0" 800 RGB x 480 TFT LCD panel
......@@ -290,16 +246,10 @@ properties:
- rocktech,rk070er9427
# Rocktech Display Ltd. RK043FN48H 4.3" 480x272 LCD-TFT panel
- rocktech,rk043fn48h
# Samsung 13.3" FHD (1920x1080 pixels) eDP AMOLED panel
- samsung,atna33xc20
# Samsung 12.2" (2560x1600 pixels) TFT LCD panel
- samsung,lsn122dl01-c01
# Samsung Electronics 10.1" WXGA (1280x800) TFT LCD panel
- samsung,ltl101al01
# Samsung Electronics 10.1" WSVGA TFT LCD panel
- samsung,ltn101nt05
# Samsung Electronics 14" WXGA (1366x768) TFT LCD panel
- samsung,ltn140at29-301
# Satoz SAT050AT40H12R2 5.0" WVGA TFT LCD panel
- satoz,sat050at40h12r2
# Sharp LQ035Q7DB03 3.5" QVGA TFT LCD panel
......@@ -308,18 +258,12 @@ properties:
- sharp,lq070y3dg3b
# Sharp Display Corp. LQ101K1LY04 10.07" WXGA TFT LCD panel
- sharp,lq101k1ly04
# Sharp 12.3" (2400x1600 pixels) TFT LCD panel
- sharp,lq123p1jx31
# Sharp 14" (1920x1080 pixels) TFT LCD panel
- sharp,lq140m1jw46
# Sharp LS020B1DD01D 2.0" HQVGA TFT LCD panel
- sharp,ls020b1dd01d
# Shelly SCA07010-BFN-LNN 7.0" WVGA TFT LCD panel
- shelly,sca07010-bfn-lnn
# Starry KR070PE2T 7" WVGA TFT LCD panel
- starry,kr070pe2t
# Starry 12.2" (1920x1200 pixels) TFT LCD panel
- starry,kr122ea0sra
# Startek KD070WVFPA043-C069A 7" TFT LCD panel
- startek,kd070wvfpa
# Team Source Display Technology TST043015CMHX 4.3" WQVGA TFT LCD panel
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/samsung,atna33xc20.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Samsung 13.3" FHD (1920x1080 pixels) eDP AMOLED panel
maintainers:
- Douglas Anderson <dianders@chromium.org>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: samsung,atna33xc20
enable-gpios: true
port: true
power-supply: true
no-hpd: true
hpd-gpios: true
additionalProperties: false
required:
- compatible
- enable-gpios
- power-supply
examples:
- |
#include <dt-bindings/clock/qcom,rpmh.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
i2c {
#address-cells = <1>;
#size-cells = <0>;
bridge@2d {
compatible = "ti,sn65dsi86";
reg = <0x2d>;
interrupt-parent = <&tlmm>;
interrupts = <10 IRQ_TYPE_LEVEL_HIGH>;
enable-gpios = <&tlmm 102 GPIO_ACTIVE_HIGH>;
vpll-supply = <&src_pp1800_s4a>;
vccio-supply = <&src_pp1800_s4a>;
vcca-supply = <&src_pp1200_l2a>;
vcc-supply = <&src_pp1200_l2a>;
clocks = <&rpmhcc RPMH_LN_BB_CLK2>;
clock-names = "refclk";
no-hpd;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
endpoint {
remote-endpoint = <&dsi0_out>;
};
};
port@1 {
reg = <1>;
sn65dsi86_out: endpoint {
remote-endpoint = <&panel_in_edp>;
};
};
};
aux-bus {
panel {
compatible = "samsung,atna33xc20";
enable-gpios = <&tlmm 12 GPIO_ACTIVE_HIGH>;
power-supply = <&pp3300_dx_edp>;
hpd-gpios = <&sn65dsi86_bridge 2 GPIO_ACTIVE_HIGH>;
port {
panel_in_edp: endpoint {
remote-endpoint = <&sn65dsi86_out>;
};
};
};
};
};
};
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/sharp,ld-d5116z01b.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Sharp LD-D5116Z01B 12.3" WUXGA+ eDP panel
maintainers:
- Jeffrey Hugo <jeffrey.l.hugo@gmail.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: sharp,ld-d5116z01b
power-supply: true
backlight: true
port: true
no-hpd: true
additionalProperties: false
required:
- compatible
- power-supply
...
......@@ -15,6 +15,7 @@ properties:
items:
- enum:
- rockchip,px30-mipi-dsi
- rockchip,rk3128-mipi-dsi
- rockchip,rk3288-mipi-dsi
- rockchip,rk3399-mipi-dsi
- rockchip,rk3568-mipi-dsi
......@@ -77,6 +78,7 @@ allOf:
contains:
enum:
- rockchip,px30-mipi-dsi
- rockchip,rk3128-mipi-dsi
- rockchip,rk3568-mipi-dsi
- rockchip,rv1126-mipi-dsi
......
......@@ -820,6 +820,8 @@ patternProperties:
description: Lichee Pi
"^linaro,.*":
description: Linaro Limited
"^lincolntech,.*":
description: Lincoln Technology Solutions
"^lineartechnology,.*":
description: Linear Technology
"^linksprite,.*":
......@@ -924,6 +926,8 @@ patternProperties:
description: Microsoft Corporation
"^microsys,.*":
description: MicroSys Electronics GmbH
"^microtips,.*":
description: Microtips Technology USA
"^mikroe,.*":
description: MikroElektronika d.o.o.
"^mikrotik,.*":
......
......@@ -57,8 +57,8 @@ is larger than the driver minor, the DRM_IOCTL_SET_VERSION call will
return an error. Otherwise the driver's set_version() method will be
called with the requested version.
Name, Description and Date
~~~~~~~~~~~~~~~~~~~~~~~~~~
Name and Description
~~~~~~~~~~~~~~~~~~~~
char \*name; char \*desc; char \*date;
The driver name is printed to the kernel log at initialization time,
......@@ -69,12 +69,6 @@ The driver description is a purely informative string passed to
userspace through the DRM_IOCTL_VERSION ioctl and otherwise unused by
the kernel.
The driver date, formatted as YYYYMMDD, is meant to identify the date of
the latest modification to the driver. However, as most drivers fail to
update it, its value is mostly useless. The DRM core prints it to the
kernel log at initialization time and passes it to userspace through the
DRM_IOCTL_VERSION ioctl.
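For userspace the visible effect of this deprecation is that DRM_IOCTL_VERSION now reports an empty/zero date. A minimal illustrative sketch, not part of this patch, using libdrm's drmGetVersion() wrapper around that ioctl (the device node path is an assumption):

/*
 * Illustrative userspace sketch: query driver name/desc/version via
 * libdrm's drmGetVersion(). With this series the returned date string is
 * zeroed, so nothing should key off it anymore.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return 1;

	drmVersionPtr v = drmGetVersion(fd);
	if (v) {
		/* name and desc remain meaningful; date is now deprecated */
		printf("driver %s (%s), version %d.%d.%d\n",
		       v->name, v->desc,
		       v->version_major, v->version_minor,
		       v->version_patchlevel);
		drmFreeVersion(v);
	}

	close(fd);
	return 0;
}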
Module Initialization
---------------------
......
......@@ -110,15 +110,21 @@ fbdev Helper Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_fb_helper.c
:doc: fbdev helpers
.. kernel-doc:: drivers/gpu/drm/drm_fbdev_dma.c
:export:
.. kernel-doc:: drivers/gpu/drm/drm_fbdev_shmem.c
:export:
.. kernel-doc:: drivers/gpu/drm/drm_fbdev_ttm.c
:export:
.. kernel-doc:: include/drm/drm_fb_helper.h
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_fb_helper.c
:export:
.. kernel-doc:: drivers/gpu/drm/drm_fbdev_generic.c
:export:
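The per-memory-manager fbdev emulation files referenced above replace the generic fbdev code for most drivers. A minimal illustrative sketch, not part of this patch, assuming a GEM-SHMEM based driver and that drm_fbdev_shmem_setup() keeps the (dev, preferred_bpp) convention of the old generic setup helper; DMA- or TTM-based drivers would call drm_fbdev_dma_setup() or drm_fbdev_ttm_setup() instead:

/*
 * Illustrative sketch: enable the memory-manager optimized fbdev
 * emulation for a GEM-SHMEM driver after registering the DRM device.
 */
#include <drm/drm_drv.h>
#include <drm/drm_fbdev_shmem.h>

static int example_probe_tail(struct drm_device *drm)
{
	int ret;

	ret = drm_dev_register(drm, 0);
	if (ret)
		return ret;

	/* 32 bpp preferred; 0 would let the helper pick a default */
	drm_fbdev_shmem_setup(drm, 32);

	return 0;
}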
format Helper Functions Reference
=================================
......
......@@ -17,7 +17,6 @@ Owner Module/Drivers,Group,Property Name,Type,Property Values,Object attached,De
,Virtual GPU,“suggested X”,RANGE,"Min=0, Max=0xffffffff",Connector,property to suggest an X offset for a connector
,,“suggested Y”,RANGE,"Min=0, Max=0xffffffff",Connector,property to suggest an Y offset for a connector
,Optional,"""aspect ratio""",ENUM,"{ ""None"", ""4:3"", ""16:9"" }",Connector,TDB
i915,Generic,"""Broadcast RGB""",ENUM,"{ ""Automatic"", ""Full"", ""Limited 16:235"" }",Connector,"When this property is set to Limited 16:235 and CTM is set, the hardware will be programmed with the result of the multiplication of CTM by the limited range matrix to ensure the pixels normally in the range 0..1.0 are remapped to the range 16/255..235/255."
,,“audio”,ENUM,"{ ""force-dvi"", ""off"", ""auto"", ""on"" }",Connector,TBD
,SDVO-TV,“mode”,ENUM,"{ ""NTSC_M"", ""NTSC_J"", ""NTSC_443"", ""PAL_B"" } etc.",Connector,TBD
,,"""left_margin""",RANGE,"Min=0, Max= SDVO dependent",Connector,TBD
......@@ -38,7 +37,6 @@ i915,Generic,"""Broadcast RGB""",ENUM,"{ ""Automatic"", ""Full"", ""Limited 16:2
,,“dot_crawl”,RANGE,"Min=0, Max=1",Connector,TBD
,SDVO-TV/LVDS,“brightness”,RANGE,"Min=0, Max= SDVO dependent",Connector,TBD
CDV gma-500,Generic,"""Broadcast RGB""",ENUM,"{ “Full”, “Limited 16:235” }",Connector,TBD
,,"""Broadcast RGB""",ENUM,"{ “off”, “auto”, “on” }",Connector,TBD
Poulsbo,Generic,“backlight”,RANGE,"Min=0, Max=100",Connector,TBD
,SDVO-TV,“mode”,ENUM,"{ ""NTSC_M"", ""NTSC_J"", ""NTSC_443"", ""PAL_B"" } etc.",Connector,TBD
,,"""left_margin""",RANGE,"Min=0, Max= SDVO dependent",Connector,TBD
......
......@@ -243,19 +243,6 @@ Contact: Maintainer of the driver you plan to convert
Level: Intermediate
Convert drivers to use drm_fbdev_generic_setup()
------------------------------------------------
Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
expected the framebuffer in system memory or system-like memory. By employing
struct iosys_map, drivers with frambuffers in I/O memory can be supported
as well.
Contact: Maintainer of the driver you plan to convert
Level: Intermediate
Reimplement functions in drm_fbdev_fb_ops without fbdev
-------------------------------------------------------
......@@ -507,6 +494,24 @@ Contact: Douglas Anderson <dianders@chromium.org>
Level: Starter/Intermediate
Transition away from using mipi_dsi_*_write_seq()
-------------------------------------------------
The macros mipi_dsi_generic_write_seq() and mipi_dsi_dcs_write_seq() are
non-intuitive because, if there are errors, they return out of the *caller's*
function. We should move all callers to use mipi_dsi_generic_write_seq_multi()
and mipi_dsi_dcs_write_seq_multi() macros instead.
Once all callers are transitioned, the macros and the functions that they call,
mipi_dsi_generic_write_chatty() and mipi_dsi_dcs_write_buffer_chatty(), can
probably be removed. Alternatively, if people feel like the _multi() variants
are overkill for some use cases, we could keep the mipi_dsi_*_write_seq()
variants but change them not to return out of the caller.
Contact: Douglas Anderson <dianders@chromium.org>
Level: Starter
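To illustrate the transition this TODO entry describes, a hedged sketch with a made-up panel init function; it assumes the _multi() variants take a struct mipi_dsi_multi_context and accumulate errors in ctx.accum_err, turning later writes into no-ops once one write has failed:

/*
 * Illustrative sketch, not part of this patch; the init sequence and
 * function names are made up.
 */
#include <drm/drm_mipi_dsi.h>

/* Before: on error the macro returns from *this* function, which is easy to miss */
static int example_panel_on_old(struct mipi_dsi_device *dsi)
{
	mipi_dsi_dcs_write_seq(dsi, 0xb0, 0x00);	/* may return -errno here */
	mipi_dsi_dcs_write_seq(dsi, 0x11);		/* exit sleep */
	return 0;
}

/* After: errors accumulate in ctx; the caller checks them once at the end */
static int example_panel_on_new(struct mipi_dsi_device *dsi)
{
	struct mipi_dsi_multi_context ctx = { .dsi = dsi };

	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x00);
	mipi_dsi_dcs_write_seq_multi(&ctx, 0x11);	/* exit sleep */

	return ctx.accum_err;
}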
Core refactorings
=================
......
......@@ -6909,7 +6909,7 @@ DRM DRIVER FOR LG SW43408 PANELS
M: Sumit Semwal <sumit.semwal@linaro.org>
M: Caleb Connolly <caleb.connolly@linaro.org>
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: Documentation/devicetree/bindings/display/panel/lg,sw43408.yaml
F: drivers/gpu/drm/panel/panel-lg-sw43408.c
......
# SPDX-License-Identifier: GPL-2.0-only
# Copyright (C) 2023 Intel Corporation
# Copyright (C) 2023-2024 Intel Corporation
intel_vpu-y := \
ivpu_drv.o \
ivpu_fw.o \
ivpu_fw_log.o \
ivpu_gem.o \
ivpu_hw_37xx.o \
ivpu_hw_40xx.o \
ivpu_hw.o \
ivpu_hw_btrs.o \
ivpu_hw_ip.o \
ivpu_ipc.o \
ivpu_job.o \
ivpu_jsm_msg.o \
ivpu_mmu.o \
ivpu_mmu_context.o \
ivpu_pm.o
ivpu_ms.o \
ivpu_pm.o \
ivpu_sysfs.o
intel_vpu-$(CONFIG_DEBUG_FS) += ivpu_debugfs.o
......
......@@ -145,6 +145,30 @@ static const struct file_operations dvfs_mode_fops = {
.write = dvfs_mode_fops_write,
};
static ssize_t
fw_dyndbg_fops_write(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
char buffer[VPU_DYNDBG_CMD_MAX_LEN] = {};
int ret;
if (size >= VPU_DYNDBG_CMD_MAX_LEN)
return -EINVAL;
ret = strncpy_from_user(buffer, user_buf, size);
if (ret < 0)
return ret;
ivpu_jsm_dyndbg_control(vdev, buffer, size);
return size;
}
static const struct file_operations fw_dyndbg_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = fw_dyndbg_fops_write,
};
static int fw_log_show(struct seq_file *s, void *v)
{
struct ivpu_device *vdev = s->private;
......@@ -335,6 +359,28 @@ static const struct file_operations ivpu_reset_engine_fops = {
.write = ivpu_reset_engine_fn,
};
static ssize_t
ivpu_resume_engine_fn(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
if (!size)
return -EINVAL;
if (ivpu_jsm_hws_resume_engine(vdev, DRM_IVPU_ENGINE_COMPUTE))
return -ENODEV;
if (ivpu_jsm_hws_resume_engine(vdev, DRM_IVPU_ENGINE_COPY))
return -ENODEV;
return size;
}
static const struct file_operations ivpu_resume_engine_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = ivpu_resume_engine_fn,
};
void ivpu_debugfs_init(struct ivpu_device *vdev)
{
struct dentry *debugfs_root = vdev->drm.debugfs_root;
......@@ -347,6 +393,8 @@ void ivpu_debugfs_init(struct ivpu_device *vdev)
debugfs_create_file("dvfs_mode", 0200, debugfs_root, vdev,
&dvfs_mode_fops);
debugfs_create_file("fw_dyndbg", 0200, debugfs_root, vdev,
&fw_dyndbg_fops);
debugfs_create_file("fw_log", 0644, debugfs_root, vdev,
&fw_log_fops);
debugfs_create_file("fw_trace_destination_mask", 0200, debugfs_root, vdev,
......@@ -358,8 +406,10 @@ void ivpu_debugfs_init(struct ivpu_device *vdev)
debugfs_create_file("reset_engine", 0200, debugfs_root, vdev,
&ivpu_reset_engine_fops);
debugfs_create_file("resume_engine", 0200, debugfs_root, vdev,
&ivpu_resume_engine_fops);
if (ivpu_hw_gen(vdev) >= IVPU_HW_40XX)
if (ivpu_hw_ip_gen(vdev) >= IVPU_HW_IP_40XX)
debugfs_create_file("fw_profiling_freq_drive", 0200,
debugfs_root, vdev, &fw_profiling_freq_fops);
}
......@@ -26,7 +26,9 @@
#include "ivpu_jsm_msg.h"
#include "ivpu_mmu.h"
#include "ivpu_mmu_context.h"
#include "ivpu_ms.h"
#include "ivpu_pm.h"
#include "ivpu_sysfs.h"
#ifndef DRIVER_VERSION_STR
#define DRIVER_VERSION_STR __stringify(DRM_IVPU_DRIVER_MAJOR) "." \
......@@ -51,10 +53,18 @@ u8 ivpu_pll_max_ratio = U8_MAX;
module_param_named(pll_max_ratio, ivpu_pll_max_ratio, byte, 0644);
MODULE_PARM_DESC(pll_max_ratio, "Maximum PLL ratio used to set NPU frequency");
int ivpu_sched_mode;
module_param_named(sched_mode, ivpu_sched_mode, int, 0444);
MODULE_PARM_DESC(sched_mode, "Scheduler mode: 0 - Default scheduler, 1 - Force HW scheduler");
bool ivpu_disable_mmu_cont_pages;
module_param_named(disable_mmu_cont_pages, ivpu_disable_mmu_cont_pages, bool, 0644);
MODULE_PARM_DESC(disable_mmu_cont_pages, "Disable MMU contiguous pages optimization");
bool ivpu_force_snoop;
module_param_named(force_snoop, ivpu_force_snoop, bool, 0644);
MODULE_PARM_DESC(force_snoop, "Force snooping for NPU host memory access");
struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv)
{
struct ivpu_device *vdev = file_priv->vdev;
......@@ -74,7 +84,6 @@ static void file_priv_unbind(struct ivpu_device *vdev, struct ivpu_file_priv *fi
ivpu_dbg(vdev, FILE, "file_priv unbind: ctx %u\n", file_priv->ctx.id);
ivpu_cmdq_release_all_locked(file_priv);
ivpu_jsm_context_release(vdev, file_priv->ctx.id);
ivpu_bo_unbind_all_bos_from_context(vdev, &file_priv->ctx);
ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
file_priv->bound = false;
......@@ -97,6 +106,7 @@ static void file_priv_release(struct kref *ref)
mutex_unlock(&vdev->context_list_lock);
pm_runtime_put_autosuspend(vdev->drm.dev);
mutex_destroy(&file_priv->ms_lock);
mutex_destroy(&file_priv->lock);
kfree(file_priv);
}
......@@ -119,7 +129,7 @@ static int ivpu_get_capabilities(struct ivpu_device *vdev, struct drm_ivpu_param
{
switch (args->index) {
case DRM_IVPU_CAP_METRIC_STREAMER:
args->value = 0;
args->value = 1;
break;
case DRM_IVPU_CAP_DMA_MEMORY_RANGE:
args->value = 1;
......@@ -228,10 +238,13 @@ static int ivpu_open(struct drm_device *dev, struct drm_file *file)
goto err_dev_exit;
}
INIT_LIST_HEAD(&file_priv->ms_instance_list);
file_priv->vdev = vdev;
file_priv->bound = true;
kref_init(&file_priv->ref);
mutex_init(&file_priv->lock);
mutex_init(&file_priv->ms_lock);
mutex_lock(&vdev->context_list_lock);
......@@ -260,6 +273,7 @@ static int ivpu_open(struct drm_device *dev, struct drm_file *file)
xa_erase_irq(&vdev->context_xa, ctx_id);
err_unlock:
mutex_unlock(&vdev->context_list_lock);
mutex_destroy(&file_priv->ms_lock);
mutex_destroy(&file_priv->lock);
kfree(file_priv);
err_dev_exit:
......@@ -275,6 +289,7 @@ static void ivpu_postclose(struct drm_device *dev, struct drm_file *file)
ivpu_dbg(vdev, FILE, "file_priv close: ctx %u process %s pid %d\n",
file_priv->ctx.id, current->comm, task_pid_nr(current));
ivpu_ms_cleanup(file_priv);
ivpu_file_priv_put(&file_priv);
}
......@@ -285,6 +300,10 @@ static const struct drm_ioctl_desc ivpu_drm_ioctls[] = {
DRM_IOCTL_DEF_DRV(IVPU_BO_INFO, ivpu_bo_info_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_SUBMIT, ivpu_submit_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_BO_WAIT, ivpu_bo_wait_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_START, ivpu_ms_start_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_GET_DATA, ivpu_ms_get_data_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_STOP, ivpu_ms_stop_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_GET_INFO, ivpu_ms_get_info_ioctl, 0),
};
static int ivpu_wait_for_ready(struct ivpu_device *vdev)
......@@ -301,7 +320,7 @@ static int ivpu_wait_for_ready(struct ivpu_device *vdev)
timeout = jiffies + msecs_to_jiffies(vdev->timeout.boot);
while (1) {
ivpu_ipc_irq_handler(vdev, NULL);
ivpu_ipc_irq_handler(vdev);
ret = ivpu_ipc_receive(vdev, &cons, &ipc_hdr, NULL, 0);
if (ret != -ETIMEDOUT || time_after_eq(jiffies, timeout))
break;
......@@ -323,6 +342,21 @@ static int ivpu_wait_for_ready(struct ivpu_device *vdev)
return ret;
}
static int ivpu_hw_sched_init(struct ivpu_device *vdev)
{
int ret = 0;
if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW) {
ret = ivpu_jsm_hws_setup_priority_bands(vdev);
if (ret) {
ivpu_err(vdev, "Failed to enable hw scheduler: %d", ret);
return ret;
}
}
return ret;
}
/**
* ivpu_boot() - Start VPU firmware
* @vdev: VPU device
......@@ -356,6 +390,10 @@ int ivpu_boot(struct ivpu_device *vdev)
enable_irq(vdev->irq);
ivpu_hw_irq_enable(vdev);
ivpu_ipc_enable(vdev);
if (ivpu_fw_is_cold_boot(vdev))
return ivpu_hw_sched_init(vdev);
return 0;
}
......@@ -411,8 +449,23 @@ static const struct drm_driver driver = {
static irqreturn_t ivpu_irq_thread_handler(int irq, void *arg)
{
struct ivpu_device *vdev = arg;
u8 irq_src;
return ivpu_ipc_irq_thread_handler(vdev);
if (kfifo_is_empty(&vdev->hw->irq.fifo))
return IRQ_NONE;
while (kfifo_get(&vdev->hw->irq.fifo, &irq_src)) {
switch (irq_src) {
case IVPU_HW_IRQ_SRC_IPC:
ivpu_ipc_irq_thread_handler(vdev);
break;
default:
ivpu_err_ratelimited(vdev, "Unknown IRQ source: %u\n", irq_src);
break;
}
}
return IRQ_HANDLED;
}
static int ivpu_irq_init(struct ivpu_device *vdev)
......@@ -426,9 +479,11 @@ static int ivpu_irq_init(struct ivpu_device *vdev)
return ret;
}
ivpu_irq_handlers_init(vdev);
vdev->irq = pci_irq_vector(pdev, 0);
ret = devm_request_threaded_irq(vdev->drm.dev, vdev->irq, vdev->hw->ops->irq_handler,
ret = devm_request_threaded_irq(vdev->drm.dev, vdev->irq, ivpu_hw_irq_handler,
ivpu_irq_thread_handler, IRQF_NO_AUTOEN, DRIVER_NAME, vdev);
if (ret)
ivpu_err(vdev, "Failed to request an IRQ %d\n", ret);
......@@ -505,13 +560,10 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
if (!vdev->pm)
return -ENOMEM;
if (ivpu_hw_gen(vdev) >= IVPU_HW_40XX) {
vdev->hw->ops = &ivpu_hw_40xx_ops;
if (ivpu_hw_ip_gen(vdev) >= IVPU_HW_IP_40XX)
vdev->hw->dma_bits = 48;
} else {
vdev->hw->ops = &ivpu_hw_37xx_ops;
else
vdev->hw->dma_bits = 38;
}
vdev->platform = IVPU_PLATFORM_INVALID;
vdev->context_xa_limit.min = IVPU_USER_CONTEXT_MIN_SSID;
......@@ -540,7 +592,7 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
goto err_xa_destroy;
/* Init basic HW info based on buttress registers which are accessible before power up */
ret = ivpu_hw_info_init(vdev);
ret = ivpu_hw_init(vdev);
if (ret)
goto err_xa_destroy;
......@@ -616,6 +668,7 @@ static void ivpu_dev_fini(struct ivpu_device *vdev)
ivpu_prepare_for_reset(vdev);
ivpu_shutdown(vdev);
ivpu_ms_cleanup_all(vdev);
ivpu_jobs_abort_all(vdev);
ivpu_job_done_consumer_fini(vdev);
ivpu_pm_cancel_recovery(vdev);
......@@ -658,6 +711,7 @@ static int ivpu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return ret;
ivpu_debugfs_init(vdev);
ivpu_sysfs_init(vdev);
ret = drm_dev_register(&vdev->drm, 0);
if (ret) {
......
......@@ -27,8 +27,13 @@
#define PCI_DEVICE_ID_ARL 0xad1d
#define PCI_DEVICE_ID_LNL 0x643e
#define IVPU_HW_37XX 37
#define IVPU_HW_40XX 40
#define IVPU_HW_IP_37XX 37
#define IVPU_HW_IP_40XX 40
#define IVPU_HW_IP_50XX 50
#define IVPU_HW_IP_60XX 60
#define IVPU_HW_BTRS_MTL 1
#define IVPU_HW_BTRS_LNL 2
#define IVPU_GLOBAL_CONTEXT_MMU_SSID 0
/* SSID 1 is used by the VPU to represent reserved context */
......@@ -39,7 +44,11 @@
#define IVPU_MIN_DB 1
#define IVPU_MAX_DB 255
#define IVPU_NUM_ENGINES 2
#define IVPU_NUM_ENGINES 2
#define IVPU_NUM_PRIORITIES 4
#define IVPU_NUM_CMDQS_PER_CTX (IVPU_NUM_ENGINES * IVPU_NUM_PRIORITIES)
#define IVPU_CMDQ_INDEX(engine, priority) ((engine) * IVPU_NUM_PRIORITIES + (priority))
#define IVPU_PLATFORM_SILICON 0
#define IVPU_PLATFORM_SIMICS 2
......@@ -131,6 +140,9 @@ struct ivpu_device {
atomic64_t unique_id_counter;
ktime_t busy_start_ts;
ktime_t busy_time;
struct {
int boot;
int jsm;
......@@ -149,8 +161,11 @@ struct ivpu_file_priv {
struct kref ref;
struct ivpu_device *vdev;
struct mutex lock; /* Protects cmdq */
struct ivpu_cmdq *cmdq[IVPU_NUM_ENGINES];
struct ivpu_cmdq *cmdq[IVPU_NUM_CMDQS_PER_CTX];
struct ivpu_mmu_context ctx;
struct mutex ms_lock; /* Protects ms_instance_list, ms_info_bo */
struct list_head ms_instance_list;
struct ivpu_bo *ms_info_bo;
bool has_mmu_faults;
bool bound;
};
......@@ -158,13 +173,17 @@ struct ivpu_file_priv {
extern int ivpu_dbg_mask;
extern u8 ivpu_pll_min_ratio;
extern u8 ivpu_pll_max_ratio;
extern int ivpu_sched_mode;
extern bool ivpu_disable_mmu_cont_pages;
extern bool ivpu_force_snoop;
#define IVPU_TEST_MODE_FW_TEST BIT(0)
#define IVPU_TEST_MODE_NULL_HW BIT(1)
#define IVPU_TEST_MODE_NULL_SUBMISSION BIT(2)
#define IVPU_TEST_MODE_D0I3_MSG_DISABLE BIT(4)
#define IVPU_TEST_MODE_D0I3_MSG_ENABLE BIT(5)
#define IVPU_TEST_MODE_PREEMPTION_DISABLE BIT(6)
#define IVPU_TEST_MODE_HWS_EXTRA_EVENTS BIT(7)
extern int ivpu_test_mode;
struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv);
......@@ -184,16 +203,32 @@ static inline u16 ivpu_device_id(struct ivpu_device *vdev)
return to_pci_dev(vdev->drm.dev)->device;
}
static inline int ivpu_hw_gen(struct ivpu_device *vdev)
static inline int ivpu_hw_ip_gen(struct ivpu_device *vdev)
{
switch (ivpu_device_id(vdev)) {
case PCI_DEVICE_ID_MTL:
case PCI_DEVICE_ID_ARL:
return IVPU_HW_IP_37XX;
case PCI_DEVICE_ID_LNL:
return IVPU_HW_IP_40XX;
default:
dump_stack();
ivpu_err(vdev, "Unknown NPU IP generation\n");
return 0;
}
}
static inline int ivpu_hw_btrs_gen(struct ivpu_device *vdev)
{
switch (ivpu_device_id(vdev)) {
case PCI_DEVICE_ID_MTL:
case PCI_DEVICE_ID_ARL:
return IVPU_HW_37XX;
return IVPU_HW_BTRS_MTL;
case PCI_DEVICE_ID_LNL:
return IVPU_HW_40XX;
return IVPU_HW_BTRS_LNL;
default:
ivpu_err(vdev, "Unknown NPU device\n");
dump_stack();
ivpu_err(vdev, "Unknown buttress generation\n");
return 0;
}
}
......@@ -231,4 +266,9 @@ static inline bool ivpu_is_fpga(struct ivpu_device *vdev)
return ivpu_get_platform(vdev) == IVPU_PLATFORM_FPGA;
}
static inline bool ivpu_is_force_snoop_enabled(struct ivpu_device *vdev)
{
return ivpu_force_snoop;
}
#endif /* __IVPU_DRV_H__ */
......@@ -44,6 +44,8 @@
#define IVPU_FW_CHECK_API_VER_LT(vdev, fw_hdr, name, major, minor) \
ivpu_fw_check_api_ver_lt(vdev, fw_hdr, #name, VPU_##name##_API_VER_INDEX, major, minor)
#define IVPU_FOCUS_PRESENT_TIMER_MS 1000
static char *ivpu_firmware;
module_param_named_unsafe(firmware, ivpu_firmware, charp, 0644);
MODULE_PARM_DESC(firmware, "NPU firmware binary in /lib/firmware/..");
......@@ -52,10 +54,10 @@ static struct {
int gen;
const char *name;
} fw_names[] = {
{ IVPU_HW_37XX, "vpu_37xx.bin" },
{ IVPU_HW_37XX, "intel/vpu/vpu_37xx_v0.0.bin" },
{ IVPU_HW_40XX, "vpu_40xx.bin" },
{ IVPU_HW_40XX, "intel/vpu/vpu_40xx_v0.0.bin" },
{ IVPU_HW_IP_37XX, "vpu_37xx.bin" },
{ IVPU_HW_IP_37XX, "intel/vpu/vpu_37xx_v0.0.bin" },
{ IVPU_HW_IP_40XX, "vpu_40xx.bin" },
{ IVPU_HW_IP_40XX, "intel/vpu/vpu_40xx_v0.0.bin" },
};
static int ivpu_fw_request(struct ivpu_device *vdev)
......@@ -71,7 +73,7 @@ static int ivpu_fw_request(struct ivpu_device *vdev)
}
for (i = 0; i < ARRAY_SIZE(fw_names); i++) {
if (fw_names[i].gen != ivpu_hw_gen(vdev))
if (fw_names[i].gen != ivpu_hw_ip_gen(vdev))
continue;
ret = firmware_request_nowarn(&vdev->fw->file, fw_names[i].name, vdev->drm.dev);
......@@ -200,6 +202,9 @@ static int ivpu_fw_parse(struct ivpu_device *vdev)
fw->dvfs_mode = 0;
fw->primary_preempt_buf_size = fw_hdr->preemption_buffer_1_size;
fw->secondary_preempt_buf_size = fw_hdr->preemption_buffer_2_size;
ivpu_dbg(vdev, FW_BOOT, "Size: file %lu image %u runtime %u shavenn %u\n",
fw->file->size, fw->image_size, fw->runtime_size, fw->shave_nn_size);
ivpu_dbg(vdev, FW_BOOT, "Address: runtime 0x%llx, load 0x%llx, entry point 0x%llx\n",
......@@ -241,7 +246,7 @@ static int ivpu_fw_update_global_range(struct ivpu_device *vdev)
return -EINVAL;
}
ivpu_hw_init_range(&vdev->hw->ranges.global, start, size);
ivpu_hw_range_init(&vdev->hw->ranges.global, start, size);
return 0;
}
......@@ -464,6 +469,8 @@ static void ivpu_fw_boot_params_print(struct ivpu_device *vdev, struct vpu_boot_
boot_params->punit_telemetry_sram_size);
ivpu_dbg(vdev, FW_BOOT, "boot_params.vpu_telemetry_enable = 0x%x\n",
boot_params->vpu_telemetry_enable);
ivpu_dbg(vdev, FW_BOOT, "boot_params.vpu_scheduling_mode = 0x%x\n",
boot_params->vpu_scheduling_mode);
ivpu_dbg(vdev, FW_BOOT, "boot_params.dvfs_mode = %u\n",
boot_params->dvfs_mode);
ivpu_dbg(vdev, FW_BOOT, "boot_params.d0i3_delayed_entry = %d\n",
......@@ -504,7 +511,7 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
boot_params->magic = VPU_BOOT_PARAMS_MAGIC;
boot_params->vpu_id = to_pci_dev(vdev->drm.dev)->bus->number;
boot_params->frequency = ivpu_hw_reg_pll_freq_get(vdev);
boot_params->frequency = ivpu_hw_pll_freq_get(vdev);
/*
* This param is a debug firmware feature. It switches default clock
......@@ -561,9 +568,12 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
boot_params->verbose_tracing_buff_addr = vdev->fw->mem_log_verb->vpu_addr;
boot_params->verbose_tracing_buff_size = ivpu_bo_size(vdev->fw->mem_log_verb);
boot_params->punit_telemetry_sram_base = ivpu_hw_reg_telemetry_offset_get(vdev);
boot_params->punit_telemetry_sram_size = ivpu_hw_reg_telemetry_size_get(vdev);
boot_params->vpu_telemetry_enable = ivpu_hw_reg_telemetry_enable_get(vdev);
boot_params->punit_telemetry_sram_base = ivpu_hw_telemetry_offset_get(vdev);
boot_params->punit_telemetry_sram_size = ivpu_hw_telemetry_size_get(vdev);
boot_params->vpu_telemetry_enable = ivpu_hw_telemetry_enable_get(vdev);
boot_params->vpu_scheduling_mode = vdev->hw->sched_mode;
if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW)
boot_params->vpu_focus_present_timer_ms = IVPU_FOCUS_PRESENT_TIMER_MS;
boot_params->dvfs_mode = vdev->fw->dvfs_mode;
if (!IVPU_WA(disable_d0i3_msg))
boot_params->d0i3_delayed_entry = 1;
......
......@@ -28,6 +28,8 @@ struct ivpu_fw_info {
u32 trace_destination_mask;
u64 trace_hw_component_mask;
u32 dvfs_mode;
u32 primary_preempt_buf_size;
u32 secondary_preempt_buf_size;
};
int ivpu_fw_init(struct ivpu_device *vdev);
......
......@@ -60,14 +60,17 @@ static inline u32 ivpu_bo_cache_mode(struct ivpu_bo *bo)
return bo->flags & DRM_IVPU_BO_CACHE_MASK;
}
static inline bool ivpu_bo_is_snooped(struct ivpu_bo *bo)
static inline struct ivpu_device *ivpu_bo_to_vdev(struct ivpu_bo *bo)
{
return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED;
return to_ivpu_device(bo->base.base.dev);
}
static inline struct ivpu_device *ivpu_bo_to_vdev(struct ivpu_bo *bo)
static inline bool ivpu_bo_is_snooped(struct ivpu_bo *bo)
{
return to_ivpu_device(bo->base.base.dev);
if (ivpu_is_force_snoop_enabled(ivpu_bo_to_vdev(bo)))
return true;
return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED;
}
static inline void *ivpu_to_cpu_addr(struct ivpu_bo *bo, u32 vpu_addr)
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020 - 2024 Intel Corporation
*/
#include "ivpu_drv.h"
#include "ivpu_hw.h"
#include "ivpu_hw_btrs.h"
#include "ivpu_hw_ip.h"
#include <linux/dmi.h>
static char *platform_to_str(u32 platform)
{
switch (platform) {
case IVPU_PLATFORM_SILICON:
return "SILICON";
case IVPU_PLATFORM_SIMICS:
return "SIMICS";
case IVPU_PLATFORM_FPGA:
return "FPGA";
default:
return "Invalid platform";
}
}
static const struct dmi_system_id dmi_platform_simulation[] = {
{
.ident = "Intel Simics",
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "lnlrvp"),
DMI_MATCH(DMI_BOARD_VERSION, "1.0"),
DMI_MATCH(DMI_BOARD_SERIAL, "123456789"),
},
},
{
.ident = "Intel Simics",
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "Simics"),
},
},
{ }
};
static void platform_init(struct ivpu_device *vdev)
{
if (dmi_check_system(dmi_platform_simulation))
vdev->platform = IVPU_PLATFORM_SIMICS;
else
vdev->platform = IVPU_PLATFORM_SILICON;
ivpu_dbg(vdev, MISC, "Platform type: %s (%d)\n",
platform_to_str(vdev->platform), vdev->platform);
}
static void wa_init(struct ivpu_device *vdev)
{
vdev->wa.punit_disabled = ivpu_is_fpga(vdev);
vdev->wa.clear_runtime_mem = false;
if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
vdev->wa.interrupt_clear_with_0 = ivpu_hw_btrs_irqs_clear_with_0_mtl(vdev);
if (ivpu_device_id(vdev) == PCI_DEVICE_ID_LNL)
vdev->wa.disable_clock_relinquish = true;
IVPU_PRINT_WA(punit_disabled);
IVPU_PRINT_WA(clear_runtime_mem);
IVPU_PRINT_WA(interrupt_clear_with_0);
IVPU_PRINT_WA(disable_clock_relinquish);
}
static void timeouts_init(struct ivpu_device *vdev)
{
if (ivpu_is_fpga(vdev)) {
vdev->timeout.boot = 100000;
vdev->timeout.jsm = 50000;
vdev->timeout.tdr = 2000000;
vdev->timeout.reschedule_suspend = 1000;
vdev->timeout.autosuspend = -1;
vdev->timeout.d0i3_entry_msg = 500;
} else if (ivpu_is_simics(vdev)) {
vdev->timeout.boot = 50;
vdev->timeout.jsm = 500;
vdev->timeout.tdr = 10000;
vdev->timeout.reschedule_suspend = 10;
vdev->timeout.autosuspend = -1;
vdev->timeout.d0i3_entry_msg = 100;
} else {
vdev->timeout.boot = 1000;
vdev->timeout.jsm = 500;
vdev->timeout.tdr = 2000;
vdev->timeout.reschedule_suspend = 10;
vdev->timeout.autosuspend = 10;
vdev->timeout.d0i3_entry_msg = 5;
}
}
static void memory_ranges_init(struct ivpu_device *vdev)
{
if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) {
ivpu_hw_range_init(&vdev->hw->ranges.global, 0x80000000, SZ_512M);
ivpu_hw_range_init(&vdev->hw->ranges.user, 0xc0000000, 255 * SZ_1M);
ivpu_hw_range_init(&vdev->hw->ranges.shave, 0x180000000, SZ_2G);
ivpu_hw_range_init(&vdev->hw->ranges.dma, 0x200000000, SZ_8G);
} else {
ivpu_hw_range_init(&vdev->hw->ranges.global, 0x80000000, SZ_512M);
ivpu_hw_range_init(&vdev->hw->ranges.user, 0x80000000, SZ_256M);
ivpu_hw_range_init(&vdev->hw->ranges.shave, 0x80000000 + SZ_256M, SZ_2G - SZ_256M);
ivpu_hw_range_init(&vdev->hw->ranges.dma, 0x200000000, SZ_8G);
}
}
static int wp_enable(struct ivpu_device *vdev)
{
return ivpu_hw_btrs_wp_drive(vdev, true);
}
static int wp_disable(struct ivpu_device *vdev)
{
return ivpu_hw_btrs_wp_drive(vdev, false);
}
int ivpu_hw_power_up(struct ivpu_device *vdev)
{
int ret;
ret = ivpu_hw_btrs_d0i3_disable(vdev);
if (ret)
ivpu_warn(vdev, "Failed to disable D0I3: %d\n", ret);
ret = wp_enable(vdev);
if (ret) {
ivpu_err(vdev, "Failed to enable workpoint: %d\n", ret);
return ret;
}
if (ivpu_hw_btrs_gen(vdev) >= IVPU_HW_BTRS_LNL) {
if (IVPU_WA(disable_clock_relinquish))
ivpu_hw_btrs_clock_relinquish_disable_lnl(vdev);
ivpu_hw_btrs_profiling_freq_reg_set_lnl(vdev);
ivpu_hw_btrs_ats_print_lnl(vdev);
}
ret = ivpu_hw_ip_host_ss_configure(vdev);
if (ret) {
ivpu_err(vdev, "Failed to configure host SS: %d\n", ret);
return ret;
}
ivpu_hw_ip_idle_gen_disable(vdev);
ret = ivpu_hw_btrs_wait_for_clock_res_own_ack(vdev);
if (ret) {
ivpu_err(vdev, "Timed out waiting for clock resource own ACK\n");
return ret;
}
ret = ivpu_hw_ip_pwr_domain_enable(vdev);
if (ret) {
ivpu_err(vdev, "Failed to enable power domain: %d\n", ret);
return ret;
}
ret = ivpu_hw_ip_host_ss_axi_enable(vdev);
if (ret) {
ivpu_err(vdev, "Failed to enable AXI: %d\n", ret);
return ret;
}
if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_LNL)
ivpu_hw_btrs_set_port_arbitration_weights_lnl(vdev);
ret = ivpu_hw_ip_top_noc_enable(vdev);
if (ret)
ivpu_err(vdev, "Failed to enable TOP NOC: %d\n", ret);
return ret;
}
static void save_d0i3_entry_timestamp(struct ivpu_device *vdev)
{
vdev->hw->d0i3_entry_host_ts = ktime_get_boottime();
vdev->hw->d0i3_entry_vpu_ts = ivpu_hw_ip_read_perf_timer_counter(vdev);
}
int ivpu_hw_reset(struct ivpu_device *vdev)
{
int ret = 0;
if (ivpu_hw_btrs_ip_reset(vdev)) {
ivpu_err(vdev, "Failed to reset NPU IP\n");
ret = -EIO;
}
if (wp_disable(vdev)) {
ivpu_err(vdev, "Failed to disable workpoint\n");
ret = -EIO;
}
return ret;
}
int ivpu_hw_power_down(struct ivpu_device *vdev)
{
int ret = 0;
save_d0i3_entry_timestamp(vdev);
if (!ivpu_hw_is_idle(vdev))
ivpu_warn(vdev, "NPU not idle during power down\n");
if (ivpu_hw_reset(vdev)) {
ivpu_err(vdev, "Failed to reset NPU\n");
ret = -EIO;
}
if (ivpu_hw_btrs_d0i3_enable(vdev)) {
ivpu_err(vdev, "Failed to enter D0I3\n");
ret = -EIO;
}
return ret;
}
int ivpu_hw_init(struct ivpu_device *vdev)
{
ivpu_hw_btrs_info_init(vdev);
ivpu_hw_btrs_freq_ratios_init(vdev);
memory_ranges_init(vdev);
platform_init(vdev);
wa_init(vdev);
timeouts_init(vdev);
return 0;
}
int ivpu_hw_boot_fw(struct ivpu_device *vdev)
{
int ret;
ivpu_hw_ip_snoop_disable(vdev);
ivpu_hw_ip_tbu_mmu_enable(vdev);
ret = ivpu_hw_ip_soc_cpu_boot(vdev);
if (ret)
ivpu_err(vdev, "Failed to boot SOC CPU: %d\n", ret);
return ret;
}
void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable)
{
if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) {
vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT;
return;
}
if (enable)
vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_HIGH;
else
vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT;
}
void ivpu_irq_handlers_init(struct ivpu_device *vdev)
{
INIT_KFIFO(vdev->hw->irq.fifo);
if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX)
vdev->hw->irq.ip_irq_handler = ivpu_hw_ip_irq_handler_37xx;
else
vdev->hw->irq.ip_irq_handler = ivpu_hw_ip_irq_handler_40xx;
if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
vdev->hw->irq.btrs_irq_handler = ivpu_hw_btrs_irq_handler_mtl;
else
vdev->hw->irq.btrs_irq_handler = ivpu_hw_btrs_irq_handler_lnl;
}
void ivpu_hw_irq_enable(struct ivpu_device *vdev)
{
kfifo_reset(&vdev->hw->irq.fifo);
ivpu_hw_ip_irq_enable(vdev);
ivpu_hw_btrs_irq_enable(vdev);
}
void ivpu_hw_irq_disable(struct ivpu_device *vdev)
{
ivpu_hw_btrs_irq_disable(vdev);
ivpu_hw_ip_irq_disable(vdev);
}
irqreturn_t ivpu_hw_irq_handler(int irq, void *ptr)
{
struct ivpu_device *vdev = ptr;
bool ip_handled, btrs_handled;
ivpu_hw_btrs_global_int_disable(vdev);
btrs_handled = ivpu_hw_btrs_irq_handler(vdev, irq);
if (!ivpu_hw_is_idle((vdev)) || !btrs_handled)
ip_handled = ivpu_hw_ip_irq_handler(vdev, irq);
else
ip_handled = false;
/* Re-enable global interrupts to re-trigger MSI for pending interrupts */
ivpu_hw_btrs_global_int_enable(vdev);
if (!kfifo_is_empty(&vdev->hw->irq.fifo))
return IRQ_WAKE_THREAD;
if (ip_handled || btrs_handled)
return IRQ_HANDLED;
return IRQ_NONE;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2023 Intel Corporation
* Copyright (C) 2020 - 2024 Intel Corporation
*/
#ifndef __IVPU_HW_H__
#define __IVPU_HW_H__
#include <linux/kfifo.h>
#include "ivpu_drv.h"
#include "ivpu_hw_btrs.h"
#include "ivpu_hw_ip.h"
struct ivpu_hw_ops {
int (*info_init)(struct ivpu_device *vdev);
int (*power_up)(struct ivpu_device *vdev);
int (*boot_fw)(struct ivpu_device *vdev);
int (*power_down)(struct ivpu_device *vdev);
int (*reset)(struct ivpu_device *vdev);
bool (*is_idle)(struct ivpu_device *vdev);
int (*wait_for_idle)(struct ivpu_device *vdev);
void (*wdt_disable)(struct ivpu_device *vdev);
void (*diagnose_failure)(struct ivpu_device *vdev);
u32 (*profiling_freq_get)(struct ivpu_device *vdev);
void (*profiling_freq_drive)(struct ivpu_device *vdev, bool enable);
u32 (*reg_pll_freq_get)(struct ivpu_device *vdev);
u32 (*ratio_to_freq)(struct ivpu_device *vdev, u32 ratio);
u32 (*reg_telemetry_offset_get)(struct ivpu_device *vdev);
u32 (*reg_telemetry_size_get)(struct ivpu_device *vdev);
u32 (*reg_telemetry_enable_get)(struct ivpu_device *vdev);
void (*reg_db_set)(struct ivpu_device *vdev, u32 db_id);
u32 (*reg_ipc_rx_addr_get)(struct ivpu_device *vdev);
u32 (*reg_ipc_rx_count_get)(struct ivpu_device *vdev);
void (*reg_ipc_tx_set)(struct ivpu_device *vdev, u32 vpu_addr);
void (*irq_clear)(struct ivpu_device *vdev);
void (*irq_enable)(struct ivpu_device *vdev);
void (*irq_disable)(struct ivpu_device *vdev);
irqreturn_t (*irq_handler)(int irq, void *ptr);
};
#define IVPU_HW_IRQ_FIFO_LENGTH 1024
#define IVPU_HW_IRQ_SRC_IPC 1
struct ivpu_addr_range {
resource_size_t start;
......@@ -41,7 +22,11 @@ struct ivpu_addr_range {
};
struct ivpu_hw_info {
const struct ivpu_hw_ops *ops;
struct {
bool (*btrs_irq_handler)(struct ivpu_device *vdev, int irq);
bool (*ip_irq_handler)(struct ivpu_device *vdev, int irq);
DECLARE_KFIFO(fifo, u8, IVPU_HW_IRQ_FIFO_LENGTH);
} irq;
struct {
struct ivpu_addr_range global;
struct ivpu_addr_range user;
......@@ -59,6 +44,7 @@ struct ivpu_hw_info {
u32 profiling_freq;
} pll;
u32 tile_fuse;
u32 sched_mode;
u32 sku;
u16 config;
int dma_bits;
......@@ -66,140 +52,107 @@ struct ivpu_hw_info {
u64 d0i3_entry_vpu_ts;
};
extern const struct ivpu_hw_ops ivpu_hw_37xx_ops;
extern const struct ivpu_hw_ops ivpu_hw_40xx_ops;
static inline int ivpu_hw_info_init(struct ivpu_device *vdev)
{
return vdev->hw->ops->info_init(vdev);
};
static inline int ivpu_hw_power_up(struct ivpu_device *vdev)
{
ivpu_dbg(vdev, PM, "HW power up\n");
int ivpu_hw_init(struct ivpu_device *vdev);
int ivpu_hw_power_up(struct ivpu_device *vdev);
int ivpu_hw_power_down(struct ivpu_device *vdev);
int ivpu_hw_reset(struct ivpu_device *vdev);
int ivpu_hw_boot_fw(struct ivpu_device *vdev);
void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable);
void ivpu_irq_handlers_init(struct ivpu_device *vdev);
void ivpu_hw_irq_enable(struct ivpu_device *vdev);
void ivpu_hw_irq_disable(struct ivpu_device *vdev);
irqreturn_t ivpu_hw_irq_handler(int irq, void *ptr);
return vdev->hw->ops->power_up(vdev);
};
static inline int ivpu_hw_boot_fw(struct ivpu_device *vdev)
{
return vdev->hw->ops->boot_fw(vdev);
};
static inline bool ivpu_hw_is_idle(struct ivpu_device *vdev)
{
return vdev->hw->ops->is_idle(vdev);
};
static inline int ivpu_hw_wait_for_idle(struct ivpu_device *vdev)
static inline u32 ivpu_hw_btrs_irq_handler(struct ivpu_device *vdev, int irq)
{
return vdev->hw->ops->wait_for_idle(vdev);
};
static inline int ivpu_hw_power_down(struct ivpu_device *vdev)
{
ivpu_dbg(vdev, PM, "HW power down\n");
return vdev->hw->ops->power_down(vdev);
};
static inline int ivpu_hw_reset(struct ivpu_device *vdev)
{
ivpu_dbg(vdev, PM, "HW reset\n");
return vdev->hw->ops->reset(vdev);
};
static inline void ivpu_hw_wdt_disable(struct ivpu_device *vdev)
{
vdev->hw->ops->wdt_disable(vdev);
};
return vdev->hw->irq.btrs_irq_handler(vdev, irq);
}
static inline u32 ivpu_hw_profiling_freq_get(struct ivpu_device *vdev)
static inline u32 ivpu_hw_ip_irq_handler(struct ivpu_device *vdev, int irq)
{
return vdev->hw->ops->profiling_freq_get(vdev);
};
return vdev->hw->irq.ip_irq_handler(vdev, irq);
}
static inline void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable)
static inline void ivpu_hw_range_init(struct ivpu_addr_range *range, u64 start, u64 size)
{
return vdev->hw->ops->profiling_freq_drive(vdev, enable);
};
range->start = start;
range->end = start + size;
}
/* Register indirect accesses */
static inline u32 ivpu_hw_reg_pll_freq_get(struct ivpu_device *vdev)
static inline u64 ivpu_hw_range_size(const struct ivpu_addr_range *range)
{
return vdev->hw->ops->reg_pll_freq_get(vdev);
};
return range->end - range->start;
}
static inline u32 ivpu_hw_ratio_to_freq(struct ivpu_device *vdev, u32 ratio)
{
return vdev->hw->ops->ratio_to_freq(vdev, ratio);
return ivpu_hw_btrs_ratio_to_freq(vdev, ratio);
}
static inline u32 ivpu_hw_reg_telemetry_offset_get(struct ivpu_device *vdev)
static inline void ivpu_hw_irq_clear(struct ivpu_device *vdev)
{
return vdev->hw->ops->reg_telemetry_offset_get(vdev);
};
ivpu_hw_ip_irq_clear(vdev);
}
static inline u32 ivpu_hw_reg_telemetry_size_get(struct ivpu_device *vdev)
static inline u32 ivpu_hw_pll_freq_get(struct ivpu_device *vdev)
{
return vdev->hw->ops->reg_telemetry_size_get(vdev);
};
return ivpu_hw_btrs_pll_freq_get(vdev);
}
static inline u32 ivpu_hw_reg_telemetry_enable_get(struct ivpu_device *vdev)
static inline u32 ivpu_hw_profiling_freq_get(struct ivpu_device *vdev)
{
return vdev->hw->ops->reg_telemetry_enable_get(vdev);
};
return vdev->hw->pll.profiling_freq;
}
static inline void ivpu_hw_reg_db_set(struct ivpu_device *vdev, u32 db_id)
static inline void ivpu_hw_diagnose_failure(struct ivpu_device *vdev)
{
vdev->hw->ops->reg_db_set(vdev, db_id);
};
ivpu_hw_ip_diagnose_failure(vdev);
ivpu_hw_btrs_diagnose_failure(vdev);
}
static inline u32 ivpu_hw_reg_ipc_rx_addr_get(struct ivpu_device *vdev)
static inline u32 ivpu_hw_telemetry_offset_get(struct ivpu_device *vdev)
{
return vdev->hw->ops->reg_ipc_rx_addr_get(vdev);
};
return ivpu_hw_btrs_telemetry_offset_get(vdev);
}
static inline u32 ivpu_hw_reg_ipc_rx_count_get(struct ivpu_device *vdev)
static inline u32 ivpu_hw_telemetry_size_get(struct ivpu_device *vdev)
{
return vdev->hw->ops->reg_ipc_rx_count_get(vdev);
};
return ivpu_hw_btrs_telemetry_size_get(vdev);
}
static inline void ivpu_hw_reg_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr)
static inline u32 ivpu_hw_telemetry_enable_get(struct ivpu_device *vdev)
{
vdev->hw->ops->reg_ipc_tx_set(vdev, vpu_addr);
};
return ivpu_hw_btrs_telemetry_enable_get(vdev);
}
static inline void ivpu_hw_irq_clear(struct ivpu_device *vdev)
static inline bool ivpu_hw_is_idle(struct ivpu_device *vdev)
{
vdev->hw->ops->irq_clear(vdev);
};
return ivpu_hw_btrs_is_idle(vdev);
}
static inline void ivpu_hw_irq_enable(struct ivpu_device *vdev)
static inline int ivpu_hw_wait_for_idle(struct ivpu_device *vdev)
{
vdev->hw->ops->irq_enable(vdev);
};
return ivpu_hw_btrs_wait_for_idle(vdev);
}
static inline void ivpu_hw_irq_disable(struct ivpu_device *vdev)
static inline void ivpu_hw_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr)
{
vdev->hw->ops->irq_disable(vdev);
};
ivpu_hw_ip_ipc_tx_set(vdev, vpu_addr);
}
static inline void ivpu_hw_init_range(struct ivpu_addr_range *range, u64 start, u64 size)
static inline void ivpu_hw_db_set(struct ivpu_device *vdev, u32 db_id)
{
range->start = start;
range->end = start + size;
ivpu_hw_ip_db_set(vdev, db_id);
}
static inline u64 ivpu_hw_range_size(const struct ivpu_addr_range *range)
static inline u32 ivpu_hw_ipc_rx_addr_get(struct ivpu_device *vdev)
{
return range->end - range->start;
return ivpu_hw_ip_ipc_rx_addr_get(vdev);
}
static inline void ivpu_hw_diagnose_failure(struct ivpu_device *vdev)
static inline u32 ivpu_hw_ipc_rx_count_get(struct ivpu_device *vdev)
{
vdev->hw->ops->diagnose_failure(vdev);
return ivpu_hw_ip_ipc_rx_count_get(vdev);
}
#endif /* __IVPU_HW_H__ */
......@@ -8,78 +8,6 @@
#include <linux/bits.h>
#define VPU_37XX_BUTTRESS_INTERRUPT_TYPE 0x00000000u
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT 0x00000004u
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT_ATS_ERR_MASK BIT_MASK(1)
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT_UFI_ERR_MASK BIT_MASK(2)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0 0x00000008u
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1 0x0000000cu
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2 0x00000010u
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2_CONFIG_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_CMD 0x00000014u
#define VPU_37XX_BUTTRESS_WP_REQ_CMD_SEND_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_WP_DOWNLOAD 0x00000018u
#define VPU_37XX_BUTTRESS_WP_DOWNLOAD_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_CURRENT_PLL 0x0000001cu
#define VPU_37XX_BUTTRESS_CURRENT_PLL_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_PLL_ENABLE 0x00000020u
#define VPU_37XX_BUTTRESS_FMIN_FUSE 0x00000024u
#define VPU_37XX_BUTTRESS_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0)
#define VPU_37XX_BUTTRESS_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8)
#define VPU_37XX_BUTTRESS_FMAX_FUSE 0x00000028u
#define VPU_37XX_BUTTRESS_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0)
#define VPU_37XX_BUTTRESS_TILE_FUSE 0x0000002cu
#define VPU_37XX_BUTTRESS_TILE_FUSE_VALID_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_TILE_FUSE_SKU_MASK GENMASK(3, 2)
#define VPU_37XX_BUTTRESS_LOCAL_INT_MASK 0x00000030u
#define VPU_37XX_BUTTRESS_GLOBAL_INT_MASK 0x00000034u
#define VPU_37XX_BUTTRESS_PLL_STATUS 0x00000040u
#define VPU_37XX_BUTTRESS_PLL_STATUS_LOCK_MASK BIT_MASK(1)
#define VPU_37XX_BUTTRESS_VPU_STATUS 0x00000044u
#define VPU_37XX_BUTTRESS_VPU_STATUS_READY_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_STATUS_IDLE_MASK BIT_MASK(1)
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL 0x00000060u
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_I3_MASK BIT_MASK(2)
#define VPU_37XX_BUTTRESS_VPU_IP_RESET 0x00000050u
#define VPU_37XX_BUTTRESS_VPU_IP_RESET_TRIGGER_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_TELEMETRY_OFFSET 0x00000080u
#define VPU_37XX_BUTTRESS_VPU_TELEMETRY_SIZE 0x00000084u
#define VPU_37XX_BUTTRESS_VPU_TELEMETRY_ENABLE 0x00000088u
#define VPU_37XX_BUTTRESS_ATS_ERR_LOG_0 0x000000a0u
#define VPU_37XX_BUTTRESS_ATS_ERR_LOG_1 0x000000a4u
#define VPU_37XX_BUTTRESS_ATS_ERR_CLEAR 0x000000a8u
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG 0x000000b0u
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_CQ_ID_MASK GENMASK(11, 0)
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_AXI_ID_MASK GENMASK(19, 12)
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_OPCODE_MASK GENMASK(24, 20)
#define VPU_37XX_BUTTRESS_UFI_ERR_CLEAR 0x000000b4u
#define VPU_37XX_HOST_SS_CPR_CLK_SET 0x00000084u
#define VPU_37XX_HOST_SS_CPR_CLK_SET_TOP_NOC_MASK BIT_MASK(1)
#define VPU_37XX_HOST_SS_CPR_CLK_SET_DSS_MAS_MASK BIT_MASK(10)
......
......@@ -8,91 +8,6 @@
#include <linux/bits.h>
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT 0x00000000u
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0)
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT_ATS_ERR_MASK BIT_MASK(1)
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT_CFI0_ERR_MASK BIT_MASK(2)
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT_CFI1_ERR_MASK BIT_MASK(3)
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT_IMR0_ERR_MASK BIT_MASK(4)
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT_IMR1_ERR_MASK BIT_MASK(5)
#define VPU_40XX_BUTTRESS_INTERRUPT_STAT_SURV_ERR_MASK BIT_MASK(6)
#define VPU_40XX_BUTTRESS_LOCAL_INT_MASK 0x00000004u
#define VPU_40XX_BUTTRESS_GLOBAL_INT_MASK 0x00000008u
#define VPU_40XX_BUTTRESS_HM_ATS 0x0000000cu
#define VPU_40XX_BUTTRESS_ATS_ERR_LOG1 0x00000010u
#define VPU_40XX_BUTTRESS_ATS_ERR_LOG2 0x00000014u
#define VPU_40XX_BUTTRESS_ATS_ERR_CLEAR 0x00000018u
#define VPU_40XX_BUTTRESS_CFI0_ERR_LOG 0x0000001cu
#define VPU_40XX_BUTTRESS_CFI0_ERR_CLEAR 0x00000020u
#define VPU_40XX_BUTTRESS_PORT_ARBITRATION_WEIGHTS_ATS 0x00000024u
#define VPU_40XX_BUTTRESS_CFI1_ERR_LOG 0x00000040u
#define VPU_40XX_BUTTRESS_CFI1_ERR_CLEAR 0x00000044u
#define VPU_40XX_BUTTRESS_IMR_ERR_CFI0_LOW 0x00000048u
#define VPU_40XX_BUTTRESS_IMR_ERR_CFI0_HIGH 0x0000004cu
#define VPU_40XX_BUTTRESS_IMR_ERR_CFI0_CLEAR 0x00000050u
#define VPU_40XX_BUTTRESS_PORT_ARBITRATION_WEIGHTS 0x00000054u
#define VPU_40XX_BUTTRESS_IMR_ERR_CFI1_LOW 0x00000058u
#define VPU_40XX_BUTTRESS_IMR_ERR_CFI1_HIGH 0x0000005cu
#define VPU_40XX_BUTTRESS_IMR_ERR_CFI1_CLEAR 0x00000060u
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD0 0x00000130u
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0)
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16)
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD1 0x00000134u
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16)
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD2 0x00000138u
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD2_CONFIG_MASK GENMASK(15, 0)
#define VPU_40XX_BUTTRESS_WP_REQ_PAYLOAD2_CDYN_MASK GENMASK(31, 16)
#define VPU_40XX_BUTTRESS_WP_REQ_CMD 0x0000013cu
#define VPU_40XX_BUTTRESS_WP_REQ_CMD_SEND_MASK BIT_MASK(0)
#define VPU_40XX_BUTTRESS_PLL_FREQ 0x00000148u
#define VPU_40XX_BUTTRESS_PLL_FREQ_RATIO_MASK GENMASK(15, 0)
#define VPU_40XX_BUTTRESS_TILE_FUSE 0x00000150u
#define VPU_40XX_BUTTRESS_TILE_FUSE_VALID_MASK BIT_MASK(0)
#define VPU_40XX_BUTTRESS_TILE_FUSE_CONFIG_MASK GENMASK(6, 1)
#define VPU_40XX_BUTTRESS_VPU_STATUS 0x00000154u
#define VPU_40XX_BUTTRESS_VPU_STATUS_READY_MASK BIT_MASK(0)
#define VPU_40XX_BUTTRESS_VPU_STATUS_IDLE_MASK BIT_MASK(1)
#define VPU_40XX_BUTTRESS_VPU_STATUS_DUP_IDLE_MASK BIT_MASK(2)
#define VPU_40XX_BUTTRESS_VPU_STATUS_CLOCK_RESOURCE_OWN_ACK_MASK BIT_MASK(6)
#define VPU_40XX_BUTTRESS_VPU_STATUS_POWER_RESOURCE_OWN_ACK_MASK BIT_MASK(7)
#define VPU_40XX_BUTTRESS_VPU_STATUS_PERF_CLK_MASK BIT_MASK(11)
#define VPU_40XX_BUTTRESS_VPU_STATUS_DISABLE_CLK_RELINQUISH_MASK BIT_MASK(12)
#define VPU_40XX_BUTTRESS_IP_RESET 0x00000160u
#define VPU_40XX_BUTTRESS_IP_RESET_TRIGGER_MASK BIT_MASK(0)
#define VPU_40XX_BUTTRESS_D0I3_CONTROL 0x00000164u
#define VPU_40XX_BUTTRESS_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0)
#define VPU_40XX_BUTTRESS_D0I3_CONTROL_I3_MASK BIT_MASK(2)
#define VPU_40XX_BUTTRESS_VPU_TELEMETRY_OFFSET 0x00000168u
#define VPU_40XX_BUTTRESS_VPU_TELEMETRY_SIZE 0x0000016cu
#define VPU_40XX_BUTTRESS_VPU_TELEMETRY_ENABLE 0x00000170u
#define VPU_40XX_BUTTRESS_FMIN_FUSE 0x00000174u
#define VPU_40XX_BUTTRESS_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0)
#define VPU_40XX_BUTTRESS_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8)
#define VPU_40XX_BUTTRESS_FMAX_FUSE 0x00000178u
#define VPU_40XX_BUTTRESS_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0)
#define VPU_40XX_HOST_SS_CPR_CLK_EN 0x00000080u
#define VPU_40XX_HOST_SS_CPR_CLK_EN_TOP_NOC_MASK BIT_MASK(1)
#define VPU_40XX_HOST_SS_CPR_CLK_EN_DSS_MAS_MASK BIT_MASK(10)
......@@ -198,6 +113,12 @@
#define VPU_40XX_HOST_SS_AON_PWR_ISLAND_STATUS0 0x0003002cu
#define VPU_40XX_HOST_SS_AON_PWR_ISLAND_STATUS0_CSS_CPU_MASK BIT_MASK(3)
#define VPU_50XX_HOST_SS_AON_PWR_ISLAND_EN_POST_DLY 0x00030068u
#define VPU_50XX_HOST_SS_AON_PWR_ISLAND_EN_POST_DLY_POST_DLY_MASK GENMASK(7, 0)
#define VPU_50XX_HOST_SS_AON_PWR_ISLAND_STATUS_DLY 0x0003006cu
#define VPU_50XX_HOST_SS_AON_PWR_ISLAND_STATUS_DLY_STATUS_DLY_MASK GENMASK(7, 0)
#define VPU_40XX_HOST_SS_AON_IDLE_GEN 0x00030200u
#define VPU_40XX_HOST_SS_AON_IDLE_GEN_EN_MASK BIT_MASK(0)
#define VPU_40XX_HOST_SS_AON_IDLE_GEN_HW_PG_EN_MASK BIT_MASK(1)
......@@ -205,6 +126,9 @@
#define VPU_40XX_HOST_SS_AON_DPU_ACTIVE 0x00030204u
#define VPU_40XX_HOST_SS_AON_DPU_ACTIVE_DPU_ACTIVE_MASK BIT_MASK(0)
#define VPU_50XX_HOST_SS_AON_FABRIC_REQ_OVERRIDE 0x00030210u
#define VPU_50XX_HOST_SS_AON_FABRIC_REQ_OVERRIDE_REQ_OVERRIDE_MASK BIT_MASK(0)
#define VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO 0x00040040u
#define VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO_DONE_MASK BIT_MASK(0)
#define VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO_IOSF_RS_ID_MASK GENMASK(2, 1)
......
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2024 Intel Corporation
*/
#ifndef __IVPU_HW_BTRS_H__
#define __IVPU_HW_BTRS_H__
#include "ivpu_drv.h"
#include "ivpu_hw_37xx_reg.h"
#include "ivpu_hw_40xx_reg.h"
#include "ivpu_hw_reg_io.h"
#define PLL_PROFILING_FREQ_DEFAULT 38400000
#define PLL_PROFILING_FREQ_HIGH 400000000
#define PLL_RATIO_TO_FREQ(x) ((x) * PLL_REF_CLK_FREQ)
int ivpu_hw_btrs_info_init(struct ivpu_device *vdev);
void ivpu_hw_btrs_freq_ratios_init(struct ivpu_device *vdev);
int ivpu_hw_btrs_irqs_clear_with_0_mtl(struct ivpu_device *vdev);
int ivpu_hw_btrs_wp_drive(struct ivpu_device *vdev, bool enable);
int ivpu_hw_btrs_wait_for_clock_res_own_ack(struct ivpu_device *vdev);
int ivpu_hw_btrs_d0i3_enable(struct ivpu_device *vdev);
int ivpu_hw_btrs_d0i3_disable(struct ivpu_device *vdev);
void ivpu_hw_btrs_set_port_arbitration_weights_lnl(struct ivpu_device *vdev);
bool ivpu_hw_btrs_is_idle(struct ivpu_device *vdev);
int ivpu_hw_btrs_wait_for_idle(struct ivpu_device *vdev);
int ivpu_hw_btrs_ip_reset(struct ivpu_device *vdev);
void ivpu_hw_btrs_profiling_freq_reg_set_lnl(struct ivpu_device *vdev);
void ivpu_hw_btrs_ats_print_lnl(struct ivpu_device *vdev);
void ivpu_hw_btrs_clock_relinquish_disable_lnl(struct ivpu_device *vdev);
bool ivpu_hw_btrs_irq_handler_mtl(struct ivpu_device *vdev, int irq);
bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq);
void ivpu_hw_btrs_dct_drive(struct ivpu_device *vdev, u32 dct_val);
u32 ivpu_hw_btrs_pll_freq_get(struct ivpu_device *vdev);
u32 ivpu_hw_btrs_ratio_to_freq(struct ivpu_device *vdev, u32 ratio);
u32 ivpu_hw_btrs_telemetry_offset_get(struct ivpu_device *vdev);
u32 ivpu_hw_btrs_telemetry_size_get(struct ivpu_device *vdev);
u32 ivpu_hw_btrs_telemetry_enable_get(struct ivpu_device *vdev);
void ivpu_hw_btrs_global_int_enable(struct ivpu_device *vdev);
void ivpu_hw_btrs_global_int_disable(struct ivpu_device *vdev);
void ivpu_hw_btrs_irq_enable(struct ivpu_device *vdev);
void ivpu_hw_btrs_irq_disable(struct ivpu_device *vdev);
void ivpu_hw_btrs_diagnose_failure(struct ivpu_device *vdev);
#endif /* __IVPU_HW_BTRS_H__ */
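The PLL_RATIO_TO_FREQ() macro above turns a PLL ratio (as read from the FMIN/FMAX fuses or the workpoint registers) into a frequency by multiplying it with the reference clock. A small worked illustration, assuming a 50 MHz reference clock; the real PLL_REF_CLK_FREQ constant is defined elsewhere in the driver and may differ, and the DEMO_ names below are hypothetical:

#define DEMO_PLL_REF_CLK_FREQ		50000000ull	/* assumed 50 MHz reference clock, illustration only */
#define DEMO_PLL_RATIO_TO_FREQ(x)	((x) * DEMO_PLL_REF_CLK_FREQ)

/* DEMO_PLL_RATIO_TO_FREQ(20) == 1000000000: a ratio of 20 corresponds to 1 GHz   */
/* DEMO_PLL_RATIO_TO_FREQ(38) == 1900000000: a ratio of 38 corresponds to 1.9 GHz */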
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2024 Intel Corporation
*/
#ifndef __IVPU_HW_BTRS_LNL_REG_H__
#define __IVPU_HW_BTRS_LNL_REG_H__
#include <linux/bits.h>
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT 0x00000000u
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0)
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT_ATS_ERR_MASK BIT_MASK(1)
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT_CFI0_ERR_MASK BIT_MASK(2)
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT_CFI1_ERR_MASK BIT_MASK(3)
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT_IMR0_ERR_MASK BIT_MASK(4)
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT_IMR1_ERR_MASK BIT_MASK(5)
#define VPU_HW_BTRS_LNL_INTERRUPT_STAT_SURV_ERR_MASK BIT_MASK(6)
#define VPU_HW_BTRS_LNL_LOCAL_INT_MASK 0x00000004u
#define VPU_HW_BTRS_LNL_GLOBAL_INT_MASK 0x00000008u
#define VPU_HW_BTRS_LNL_HM_ATS 0x0000000cu
#define VPU_HW_BTRS_LNL_ATS_ERR_LOG1 0x00000010u
#define VPU_HW_BTRS_LNL_ATS_ERR_LOG2 0x00000014u
#define VPU_HW_BTRS_LNL_ATS_ERR_CLEAR 0x00000018u
#define VPU_HW_BTRS_LNL_CFI0_ERR_LOG 0x0000001cu
#define VPU_HW_BTRS_LNL_CFI0_ERR_CLEAR 0x00000020u
#define VPU_HW_BTRS_LNL_PORT_ARBITRATION_WEIGHTS_ATS 0x00000024u
#define VPU_HW_BTRS_LNL_CFI1_ERR_LOG 0x00000040u
#define VPU_HW_BTRS_LNL_CFI1_ERR_CLEAR 0x00000044u
#define VPU_HW_BTRS_LNL_IMR_ERR_CFI0_LOW 0x00000048u
#define VPU_HW_BTRS_LNL_IMR_ERR_CFI0_HIGH 0x0000004cu
#define VPU_HW_BTRS_LNL_IMR_ERR_CFI0_CLEAR 0x00000050u
#define VPU_HW_BTRS_LNL_PORT_ARBITRATION_WEIGHTS 0x00000054u
#define VPU_HW_BTRS_LNL_IMR_ERR_CFI1_LOW 0x00000058u
#define VPU_HW_BTRS_LNL_IMR_ERR_CFI1_HIGH 0x0000005cu
#define VPU_HW_BTRS_LNL_IMR_ERR_CFI1_CLEAR 0x00000060u
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX 0x00000070u
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_CMD_MASK GENMASK(7, 0)
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_PARAM1_MASK GENMASK(15, 8)
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_PARAM2_MASK GENMASK(23, 16)
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_PARAM3_MASK GENMASK(31, 24)
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_SHADOW 0x00000074u
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_SHADOW_CMD_MASK GENMASK(7, 0)
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_SHADOW_PARAM1_MASK GENMASK(15, 8)
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_SHADOW_PARAM2_MASK GENMASK(23, 16)
#define VPU_HW_BTRS_LNL_PCODE_MAILBOX_SHADOW_PARAM3_MASK GENMASK(31, 24)
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0 0x00000130u
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16)
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1 0x00000134u
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16)
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2 0x00000138u
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2_CONFIG_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2_CDYN_MASK GENMASK(31, 16)
#define VPU_HW_BTRS_LNL_WP_REQ_CMD 0x0000013cu
#define VPU_HW_BTRS_LNL_WP_REQ_CMD_SEND_MASK BIT_MASK(0)
#define VPU_HW_BTRS_LNL_PLL_FREQ 0x00000148u
#define VPU_HW_BTRS_LNL_PLL_FREQ_RATIO_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_LNL_TILE_FUSE 0x00000150u
#define VPU_HW_BTRS_LNL_TILE_FUSE_VALID_MASK BIT_MASK(0)
#define VPU_HW_BTRS_LNL_TILE_FUSE_CONFIG_MASK GENMASK(6, 1)
#define VPU_HW_BTRS_LNL_VPU_STATUS 0x00000154u
#define VPU_HW_BTRS_LNL_VPU_STATUS_READY_MASK BIT_MASK(0)
#define VPU_HW_BTRS_LNL_VPU_STATUS_IDLE_MASK BIT_MASK(1)
#define VPU_HW_BTRS_LNL_VPU_STATUS_DUP_IDLE_MASK BIT_MASK(2)
#define VPU_HW_BTRS_LNL_VPU_STATUS_CLOCK_RESOURCE_OWN_ACK_MASK BIT_MASK(6)
#define VPU_HW_BTRS_LNL_VPU_STATUS_POWER_RESOURCE_OWN_ACK_MASK BIT_MASK(7)
#define VPU_HW_BTRS_LNL_VPU_STATUS_PERF_CLK_MASK BIT_MASK(11)
#define VPU_HW_BTRS_LNL_VPU_STATUS_DISABLE_CLK_RELINQUISH_MASK BIT_MASK(12)
#define VPU_HW_BTRS_LNL_IP_RESET 0x00000160u
#define VPU_HW_BTRS_LNL_IP_RESET_TRIGGER_MASK BIT_MASK(0)
#define VPU_HW_BTRS_LNL_D0I3_CONTROL 0x00000164u
#define VPU_HW_BTRS_LNL_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0)
#define VPU_HW_BTRS_LNL_D0I3_CONTROL_I3_MASK BIT_MASK(2)
#define VPU_HW_BTRS_LNL_VPU_TELEMETRY_OFFSET 0x00000168u
#define VPU_HW_BTRS_LNL_VPU_TELEMETRY_SIZE 0x0000016cu
#define VPU_HW_BTRS_LNL_VPU_TELEMETRY_ENABLE 0x00000170u
#define VPU_HW_BTRS_LNL_FMIN_FUSE 0x00000174u
#define VPU_HW_BTRS_LNL_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0)
#define VPU_HW_BTRS_LNL_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8)
#define VPU_HW_BTRS_LNL_FMAX_FUSE 0x00000178u
#define VPU_HW_BTRS_LNL_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0)
#endif /* __IVPU_HW_BTRS_LNL_REG_H__ */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2023 Intel Corporation
*/
#ifndef __IVPU_HW_BTRS_MTL_REG_H__
#define __IVPU_HW_BTRS_MTL_REG_H__
#include <linux/bits.h>
#define VPU_HW_BTRS_MTL_INTERRUPT_TYPE 0x00000000u
#define VPU_HW_BTRS_MTL_INTERRUPT_STAT 0x00000004u
#define VPU_HW_BTRS_MTL_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0)
#define VPU_HW_BTRS_MTL_INTERRUPT_STAT_ATS_ERR_MASK BIT_MASK(1)
#define VPU_HW_BTRS_MTL_INTERRUPT_STAT_UFI_ERR_MASK BIT_MASK(2)
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0 0x00000008u
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16)
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1 0x0000000cu
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16)
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2 0x00000010u
#define VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2_CONFIG_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_MTL_WP_REQ_CMD 0x00000014u
#define VPU_HW_BTRS_MTL_WP_REQ_CMD_SEND_MASK BIT_MASK(0)
#define VPU_HW_BTRS_MTL_WP_DOWNLOAD 0x00000018u
#define VPU_HW_BTRS_MTL_WP_DOWNLOAD_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_MTL_CURRENT_PLL 0x0000001cu
#define VPU_HW_BTRS_MTL_CURRENT_PLL_RATIO_MASK GENMASK(15, 0)
#define VPU_HW_BTRS_MTL_PLL_ENABLE 0x00000020u
#define VPU_HW_BTRS_MTL_FMIN_FUSE 0x00000024u
#define VPU_HW_BTRS_MTL_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0)
#define VPU_HW_BTRS_MTL_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8)
#define VPU_HW_BTRS_MTL_FMAX_FUSE 0x00000028u
#define VPU_HW_BTRS_MTL_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0)
#define VPU_HW_BTRS_MTL_TILE_FUSE 0x0000002cu
#define VPU_HW_BTRS_MTL_TILE_FUSE_VALID_MASK BIT_MASK(0)
#define VPU_HW_BTRS_MTL_TILE_FUSE_SKU_MASK GENMASK(3, 2)
#define VPU_HW_BTRS_MTL_LOCAL_INT_MASK 0x00000030u
#define VPU_HW_BTRS_MTL_GLOBAL_INT_MASK 0x00000034u
#define VPU_HW_BTRS_MTL_PLL_STATUS 0x00000040u
#define VPU_HW_BTRS_MTL_PLL_STATUS_LOCK_MASK BIT_MASK(1)
#define VPU_HW_BTRS_MTL_VPU_STATUS 0x00000044u
#define VPU_HW_BTRS_MTL_VPU_STATUS_READY_MASK BIT_MASK(0)
#define VPU_HW_BTRS_MTL_VPU_STATUS_IDLE_MASK BIT_MASK(1)
#define VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL 0x00000060u
#define VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0)
#define VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL_I3_MASK BIT_MASK(2)
#define VPU_HW_BTRS_MTL_VPU_IP_RESET 0x00000050u
#define VPU_HW_BTRS_MTL_VPU_IP_RESET_TRIGGER_MASK BIT_MASK(0)
#define VPU_HW_BTRS_MTL_VPU_TELEMETRY_OFFSET 0x00000080u
#define VPU_HW_BTRS_MTL_VPU_TELEMETRY_SIZE 0x00000084u
#define VPU_HW_BTRS_MTL_VPU_TELEMETRY_ENABLE 0x00000088u
#define VPU_HW_BTRS_MTL_ATS_ERR_LOG_0 0x000000a0u
#define VPU_HW_BTRS_MTL_ATS_ERR_LOG_1 0x000000a4u
#define VPU_HW_BTRS_MTL_ATS_ERR_CLEAR 0x000000a8u
#define VPU_HW_BTRS_MTL_UFI_ERR_LOG 0x000000b0u
#define VPU_HW_BTRS_MTL_UFI_ERR_LOG_CQ_ID_MASK GENMASK(11, 0)
#define VPU_HW_BTRS_MTL_UFI_ERR_LOG_AXI_ID_MASK GENMASK(19, 12)
#define VPU_HW_BTRS_MTL_UFI_ERR_LOG_OPCODE_MASK GENMASK(24, 20)
#define VPU_HW_BTRS_MTL_UFI_ERR_CLEAR 0x000000b4u
#endif /* __IVPU_HW_BTRS_MTL_REG_H__ */
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2024 Intel Corporation
*/
#ifndef __IVPU_HW_IP_H__
#define __IVPU_HW_IP_H__
#include "ivpu_drv.h"
int ivpu_hw_ip_host_ss_configure(struct ivpu_device *vdev);
void ivpu_hw_ip_idle_gen_enable(struct ivpu_device *vdev);
void ivpu_hw_ip_idle_gen_disable(struct ivpu_device *vdev);
int ivpu_hw_ip_pwr_domain_enable(struct ivpu_device *vdev);
int ivpu_hw_ip_host_ss_axi_enable(struct ivpu_device *vdev);
int ivpu_hw_ip_top_noc_enable(struct ivpu_device *vdev);
u64 ivpu_hw_ip_read_perf_timer_counter(struct ivpu_device *vdev);
void ivpu_hw_ip_snoop_disable(struct ivpu_device *vdev);
void ivpu_hw_ip_tbu_mmu_enable(struct ivpu_device *vdev);
int ivpu_hw_ip_soc_cpu_boot(struct ivpu_device *vdev);
void ivpu_hw_ip_wdt_disable(struct ivpu_device *vdev);
void ivpu_hw_ip_diagnose_failure(struct ivpu_device *vdev);
u32 ivpu_hw_ip_ipc_rx_count_get(struct ivpu_device *vdev);
void ivpu_hw_ip_irq_clear(struct ivpu_device *vdev);
bool ivpu_hw_ip_irq_handler_37xx(struct ivpu_device *vdev, int irq);
bool ivpu_hw_ip_irq_handler_40xx(struct ivpu_device *vdev, int irq);
void ivpu_hw_ip_db_set(struct ivpu_device *vdev, u32 db_id);
u32 ivpu_hw_ip_ipc_rx_addr_get(struct ivpu_device *vdev);
void ivpu_hw_ip_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr);
void ivpu_hw_ip_irq_enable(struct ivpu_device *vdev);
void ivpu_hw_ip_irq_disable(struct ivpu_device *vdev);
void ivpu_hw_ip_diagnose_failure(struct ivpu_device *vdev);
void ivpu_hw_ip_fabric_req_override_enable_50xx(struct ivpu_device *vdev);
void ivpu_hw_ip_fabric_req_override_disable_50xx(struct ivpu_device *vdev);
#endif /* __IVPU_HW_IP_H__ */
......@@ -129,7 +129,7 @@ static void ivpu_ipc_tx_release(struct ivpu_device *vdev, u32 vpu_addr)
static void ivpu_ipc_tx(struct ivpu_device *vdev, u32 vpu_addr)
{
ivpu_hw_reg_ipc_tx_set(vdev, vpu_addr);
ivpu_hw_ipc_tx_set(vdev, vpu_addr);
}
static void
......@@ -378,7 +378,7 @@ ivpu_ipc_match_consumer(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons
return false;
}
void ivpu_ipc_irq_handler(struct ivpu_device *vdev, bool *wake_thread)
void ivpu_ipc_irq_handler(struct ivpu_device *vdev)
{
struct ivpu_ipc_info *ipc = vdev->ipc;
struct ivpu_ipc_consumer *cons;
......@@ -392,8 +392,8 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev, bool *wake_thread)
* The driver needs to purge all messages from the IPC FIFO to clear the IPC interrupt.
* Unless the FIFO is drained to 0, no further IPC interrupts will be generated.
*/
while (ivpu_hw_reg_ipc_rx_count_get(vdev)) {
vpu_addr = ivpu_hw_reg_ipc_rx_addr_get(vdev);
while (ivpu_hw_ipc_rx_count_get(vdev)) {
vpu_addr = ivpu_hw_ipc_rx_addr_get(vdev);
if (vpu_addr == REG_IO_ERROR) {
ivpu_err_ratelimited(vdev, "Failed to read IPC rx addr register\n");
return;
......@@ -442,11 +442,12 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev, bool *wake_thread)
}
}
if (wake_thread)
*wake_thread = !list_empty(&ipc->cb_msg_list);
if (!list_empty(&ipc->cb_msg_list))
if (!kfifo_put(&vdev->hw->irq.fifo, IVPU_HW_IRQ_SRC_IPC))
ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
}
irqreturn_t ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev)
void ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev)
{
struct ivpu_ipc_info *ipc = vdev->ipc;
struct ivpu_ipc_rx_msg *rx_msg, *r;
......@@ -462,8 +463,6 @@ irqreturn_t ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev)
rx_msg->callback(vdev, rx_msg->ipc_hdr, rx_msg->jsm_msg);
ivpu_ipc_rx_msg_del(vdev, rx_msg);
}
return IRQ_HANDLED;
}
int ivpu_ipc_init(struct ivpu_device *vdev)
......
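The ivpu_ipc.c hunks above replace the old wake_thread out-parameter with a per-device IRQ-source FIFO: the hard IRQ handler only records that an IPC event is pending, and the threaded handler drains the queue later. A minimal sketch of the same producer/consumer pattern built on the generic kfifo API; the demo_* and DEMO_IRQ_SRC_* names are hypothetical and do not mirror the driver's internal structures:

#include <linux/interrupt.h>
#include <linux/kfifo.h>
#include <linux/printk.h>

enum demo_irq_src {				/* hypothetical interrupt sources */
	DEMO_IRQ_SRC_IPC = 1,
	DEMO_IRQ_SRC_MMU,
};

/* 8-slot FIFO of source ids shared by both handlers (kfifo sizes must be powers of two) */
static DEFINE_KFIFO(demo_irq_fifo, u8, 8);

static irqreturn_t demo_irq_handler(int irq, void *arg)
{
	/* hard IRQ context: just record the source and defer the heavy work */
	if (!kfifo_put(&demo_irq_fifo, DEMO_IRQ_SRC_IPC))
		pr_err_ratelimited("demo: IRQ FIFO full\n");

	return IRQ_WAKE_THREAD;
}

static irqreturn_t demo_irq_thread(int irq, void *arg)
{
	u8 src;

	/* threaded context: drain every queued source and dispatch it */
	while (kfifo_get(&demo_irq_fifo, &src))
		pr_info("demo: handling IRQ source %u\n", src);

	return IRQ_HANDLED;
}

Registering both functions with request_threaded_irq() gives the same split the patch aims for: a cheap producer in hard IRQ context and the heavier consumer in the IRQ thread.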
......@@ -89,8 +89,8 @@ void ivpu_ipc_enable(struct ivpu_device *vdev);
void ivpu_ipc_disable(struct ivpu_device *vdev);
void ivpu_ipc_reset(struct ivpu_device *vdev);
void ivpu_ipc_irq_handler(struct ivpu_device *vdev, bool *wake_thread);
irqreturn_t ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev);
void ivpu_ipc_irq_handler(struct ivpu_device *vdev);
void ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev);
void ivpu_ipc_consumer_add(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
u32 channel, ivpu_ipc_rx_callback_t callback);
......
This diff is collapsed.
......@@ -24,6 +24,8 @@ struct ivpu_file_priv;
*/
struct ivpu_cmdq {
struct vpu_job_queue *jobq;
struct ivpu_bo *primary_preempt_buf;
struct ivpu_bo *secondary_preempt_buf;
struct ivpu_bo *mem;
u32 entry_count;
u32 db_id;
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020-2023 Intel Corporation
* Copyright (C) 2020-2024 Intel Corporation
*/
#include "ivpu_drv.h"
......@@ -281,3 +281,260 @@ int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev)
return ivpu_hw_wait_for_idle(vdev);
}
int ivpu_jsm_hws_create_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_group, u32 cmdq_id,
u32 pid, u32 engine, u64 cmdq_base, u32 cmdq_size)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_CREATE_CMD_QUEUE };
struct vpu_jsm_msg resp;
int ret;
req.payload.hws_create_cmdq.host_ssid = ctx_id;
req.payload.hws_create_cmdq.process_id = pid;
req.payload.hws_create_cmdq.engine_idx = engine;
req.payload.hws_create_cmdq.cmdq_group = cmdq_group;
req.payload.hws_create_cmdq.cmdq_id = cmdq_id;
req.payload.hws_create_cmdq.cmdq_base = cmdq_base;
req.payload.hws_create_cmdq.cmdq_size = cmdq_size;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_CREATE_CMD_QUEUE_RSP, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_warn_ratelimited(vdev, "Failed to create command queue: %d\n", ret);
return ret;
}
int ivpu_jsm_hws_destroy_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_DESTROY_CMD_QUEUE };
struct vpu_jsm_msg resp;
int ret;
req.payload.hws_destroy_cmdq.host_ssid = ctx_id;
req.payload.hws_destroy_cmdq.cmdq_id = cmdq_id;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_DESTROY_CMD_QUEUE_RSP, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_warn_ratelimited(vdev, "Failed to destroy command queue: %d\n", ret);
return ret;
}
int ivpu_jsm_hws_register_db(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id, u32 db_id,
u64 cmdq_base, u32 cmdq_size)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_HWS_REGISTER_DB };
struct vpu_jsm_msg resp;
int ret = 0;
req.payload.hws_register_db.db_id = db_id;
req.payload.hws_register_db.host_ssid = ctx_id;
req.payload.hws_register_db.cmdq_id = cmdq_id;
req.payload.hws_register_db.cmdq_base = cmdq_base;
req.payload.hws_register_db.cmdq_size = cmdq_size;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_REGISTER_DB_DONE, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_err_ratelimited(vdev, "Failed to register doorbell %u: %d\n", db_id, ret);
return ret;
}
int ivpu_jsm_hws_resume_engine(struct ivpu_device *vdev, u32 engine)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_HWS_ENGINE_RESUME };
struct vpu_jsm_msg resp;
int ret;
if (engine >= VPU_ENGINE_NB)
return -EINVAL;
req.payload.hws_resume_engine.engine_idx = engine;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_HWS_RESUME_ENGINE_DONE, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_err_ratelimited(vdev, "Failed to resume engine %d: %d\n", engine, ret);
return ret;
}
int ivpu_jsm_hws_set_context_sched_properties(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id,
u32 priority)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_SET_CONTEXT_SCHED_PROPERTIES };
struct vpu_jsm_msg resp;
int ret;
req.payload.hws_set_context_sched_properties.host_ssid = ctx_id;
req.payload.hws_set_context_sched_properties.cmdq_id = cmdq_id;
req.payload.hws_set_context_sched_properties.priority_band = priority;
req.payload.hws_set_context_sched_properties.realtime_priority_level = 0;
req.payload.hws_set_context_sched_properties.in_process_priority = 0;
req.payload.hws_set_context_sched_properties.context_quantum = 20000;
req.payload.hws_set_context_sched_properties.grace_period_same_priority = 10000;
req.payload.hws_set_context_sched_properties.grace_period_lower_priority = 0;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_SET_CONTEXT_SCHED_PROPERTIES_RSP, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_warn_ratelimited(vdev, "Failed to set context sched properties: %d\n", ret);
return ret;
}
int ivpu_jsm_hws_set_scheduling_log(struct ivpu_device *vdev, u32 engine_idx, u32 host_ssid,
u64 vpu_log_buffer_va)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_HWS_SET_SCHEDULING_LOG };
struct vpu_jsm_msg resp;
int ret;
req.payload.hws_set_scheduling_log.engine_idx = engine_idx;
req.payload.hws_set_scheduling_log.host_ssid = host_ssid;
req.payload.hws_set_scheduling_log.vpu_log_buffer_va = vpu_log_buffer_va;
req.payload.hws_set_scheduling_log.notify_index = 0;
req.payload.hws_set_scheduling_log.enable_extra_events =
ivpu_test_mode & IVPU_TEST_MODE_HWS_EXTRA_EVENTS;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_HWS_SET_SCHEDULING_LOG_RSP, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_warn_ratelimited(vdev, "Failed to set scheduling log: %d\n", ret);
return ret;
}
int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP };
struct vpu_jsm_msg resp;
int ret;
/* Idle */
req.payload.hws_priority_band_setup.grace_period[0] = 0;
req.payload.hws_priority_band_setup.process_grace_period[0] = 50000;
req.payload.hws_priority_band_setup.process_quantum[0] = 160000;
/* Normal */
req.payload.hws_priority_band_setup.grace_period[1] = 50000;
req.payload.hws_priority_band_setup.process_grace_period[1] = 50000;
req.payload.hws_priority_band_setup.process_quantum[1] = 300000;
/* Focus */
req.payload.hws_priority_band_setup.grace_period[2] = 50000;
req.payload.hws_priority_band_setup.process_grace_period[2] = 50000;
req.payload.hws_priority_band_setup.process_quantum[2] = 200000;
/* Realtime */
req.payload.hws_priority_band_setup.grace_period[3] = 0;
req.payload.hws_priority_band_setup.process_grace_period[3] = 50000;
req.payload.hws_priority_band_setup.process_quantum[3] = 200000;
req.payload.hws_priority_band_setup.normal_band_percentage = 10;
ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
&resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_warn_ratelimited(vdev, "Failed to set priority bands: %d\n", ret);
return ret;
}
int ivpu_jsm_metric_streamer_start(struct ivpu_device *vdev, u64 metric_group_mask,
u64 sampling_rate, u64 buffer_addr, u64 buffer_size)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_START };
struct vpu_jsm_msg resp;
int ret;
req.payload.metric_streamer_start.metric_group_mask = metric_group_mask;
req.payload.metric_streamer_start.sampling_rate = sampling_rate;
req.payload.metric_streamer_start.buffer_addr = buffer_addr;
req.payload.metric_streamer_start.buffer_size = buffer_size;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_START_DONE, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret) {
ivpu_warn_ratelimited(vdev, "Failed to start metric streamer: ret %d\n", ret);
return ret;
}
return ret;
}
int ivpu_jsm_metric_streamer_stop(struct ivpu_device *vdev, u64 metric_group_mask)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_STOP };
struct vpu_jsm_msg resp;
int ret;
req.payload.metric_streamer_stop.metric_group_mask = metric_group_mask;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_STOP_DONE, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret)
ivpu_warn_ratelimited(vdev, "Failed to stop metric streamer: ret %d\n", ret);
return ret;
}
int ivpu_jsm_metric_streamer_update(struct ivpu_device *vdev, u64 metric_group_mask,
u64 buffer_addr, u64 buffer_size, u64 *bytes_written)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_UPDATE };
struct vpu_jsm_msg resp;
int ret;
req.payload.metric_streamer_update.metric_group_mask = metric_group_mask;
req.payload.metric_streamer_update.buffer_addr = buffer_addr;
req.payload.metric_streamer_update.buffer_size = buffer_size;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_UPDATE_DONE, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret) {
ivpu_warn_ratelimited(vdev, "Failed to update metric streamer: ret %d\n", ret);
return ret;
}
if (buffer_size && resp.payload.metric_streamer_done.bytes_written > buffer_size) {
ivpu_warn_ratelimited(vdev, "MS buffer overflow: bytes_written %#llx > buffer_size %#llx\n",
resp.payload.metric_streamer_done.bytes_written, buffer_size);
return -EOVERFLOW;
}
*bytes_written = resp.payload.metric_streamer_done.bytes_written;
return ret;
}
int ivpu_jsm_metric_streamer_info(struct ivpu_device *vdev, u64 metric_group_mask, u64 buffer_addr,
u64 buffer_size, u32 *sample_size, u64 *info_size)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_INFO };
struct vpu_jsm_msg resp;
int ret;
req.payload.metric_streamer_start.metric_group_mask = metric_group_mask;
req.payload.metric_streamer_start.buffer_addr = buffer_addr;
req.payload.metric_streamer_start.buffer_size = buffer_size;
ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_INFO_DONE, &resp,
VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
if (ret) {
ivpu_warn_ratelimited(vdev, "Failed to get metric streamer info: ret %d\n", ret);
return ret;
}
if (!resp.payload.metric_streamer_done.sample_size) {
ivpu_warn_ratelimited(vdev, "Invalid sample size\n");
return -EBADMSG;
}
if (sample_size)
*sample_size = resp.payload.metric_streamer_done.sample_size;
if (info_size)
*info_size = resp.payload.metric_streamer_done.bytes_written;
return ret;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2023 Intel Corporation
* Copyright (C) 2020-2024 Intel Corporation
*/
#ifndef __IVPU_JSM_MSG_H__
......@@ -23,4 +23,22 @@ int ivpu_jsm_trace_set_config(struct ivpu_device *vdev, u32 trace_level, u32 tra
u64 trace_hw_component_mask);
int ivpu_jsm_context_release(struct ivpu_device *vdev, u32 host_ssid);
int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev);
int ivpu_jsm_hws_create_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_group, u32 cmdq_id,
u32 pid, u32 engine, u64 cmdq_base, u32 cmdq_size);
int ivpu_jsm_hws_destroy_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id);
int ivpu_jsm_hws_register_db(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id, u32 db_id,
u64 cmdq_base, u32 cmdq_size);
int ivpu_jsm_hws_resume_engine(struct ivpu_device *vdev, u32 engine);
int ivpu_jsm_hws_set_context_sched_properties(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id,
u32 priority);
int ivpu_jsm_hws_set_scheduling_log(struct ivpu_device *vdev, u32 engine_idx, u32 host_ssid,
u64 vpu_log_buffer_va);
int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev);
int ivpu_jsm_metric_streamer_start(struct ivpu_device *vdev, u64 metric_group_mask,
u64 sampling_rate, u64 buffer_addr, u64 buffer_size);
int ivpu_jsm_metric_streamer_stop(struct ivpu_device *vdev, u64 metric_group_mask);
int ivpu_jsm_metric_streamer_update(struct ivpu_device *vdev, u64 metric_group_mask,
u64 buffer_addr, u64 buffer_size, u64 *bytes_written);
int ivpu_jsm_metric_streamer_info(struct ivpu_device *vdev, u64 metric_group_mask, u64 buffer_addr,
u64 buffer_size, u32 *sample_size, u64 *info_size);
#endif
......@@ -519,7 +519,8 @@ static int ivpu_mmu_cmdq_sync(struct ivpu_device *vdev)
if (ret)
return ret;
clflush_cache_range(q->base, IVPU_MMU_CMDQ_SIZE);
if (!ivpu_is_force_snoop_enabled(vdev))
clflush_cache_range(q->base, IVPU_MMU_CMDQ_SIZE);
REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, q->prod);
ret = ivpu_mmu_cmdq_wait_for_cons(vdev);
......@@ -567,7 +568,8 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
int ret;
memset(mmu->cmdq.base, 0, IVPU_MMU_CMDQ_SIZE);
clflush_cache_range(mmu->cmdq.base, IVPU_MMU_CMDQ_SIZE);
if (!ivpu_is_force_snoop_enabled(vdev))
clflush_cache_range(mmu->cmdq.base, IVPU_MMU_CMDQ_SIZE);
mmu->cmdq.prod = 0;
mmu->cmdq.cons = 0;
......@@ -661,7 +663,8 @@ static void ivpu_mmu_strtab_link_cd(struct ivpu_device *vdev, u32 sid)
WRITE_ONCE(entry[1], str[1]);
WRITE_ONCE(entry[0], str[0]);
clflush_cache_range(entry, IVPU_MMU_STRTAB_ENT_SIZE);
if (!ivpu_is_force_snoop_enabled(vdev))
clflush_cache_range(entry, IVPU_MMU_STRTAB_ENT_SIZE);
ivpu_dbg(vdev, MMU, "STRTAB write entry (SSID=%u): 0x%llx, 0x%llx\n", sid, str[0], str[1]);
}
......@@ -735,7 +738,8 @@ static int ivpu_mmu_cd_add(struct ivpu_device *vdev, u32 ssid, u64 cd_dma)
WRITE_ONCE(entry[3], cd[3]);
WRITE_ONCE(entry[0], cd[0]);
clflush_cache_range(entry, IVPU_MMU_CDTAB_ENT_SIZE);
if (!ivpu_is_force_snoop_enabled(vdev))
clflush_cache_range(entry, IVPU_MMU_CDTAB_ENT_SIZE);
ivpu_dbg(vdev, MMU, "CDTAB %s entry (SSID=%u, dma=%pad): 0x%llx, 0x%llx, 0x%llx, 0x%llx\n",
cd_dma ? "write" : "clear", ssid, &cd_dma, cd[0], cd[1], cd[2], cd[3]);
......
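Each of the ivpu_mmu.c hunks above wraps an existing clflush_cache_range() call in the same ivpu_is_force_snoop_enabled() check, presumably because forced snooping already keeps the memory coherent and the explicit cache flush can then be skipped. A hedged sketch of a helper that would factor out that repeated pattern; ivpu_mmu_cache_flush() is hypothetical and not part of the patch:

/* Hypothetical helper: flush CPU caches only when snooping is not forced. */
static void ivpu_mmu_cache_flush(struct ivpu_device *vdev, void *addr, unsigned int size)
{
	if (!ivpu_is_force_snoop_enabled(vdev))
		clflush_cache_range(addr, size);
}

With such a helper, each of the call sites above would shrink to a single line, e.g. ivpu_mmu_cache_flush(vdev, q->base, IVPU_MMU_CMDQ_SIZE).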
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0-only OR MIT */
/*
* Copyright (C) 2020-2024 Intel Corporation
*/
#ifndef __IVPU_MS_H__
#define __IVPU_MS_H__
#include <linux/list.h>
struct drm_device;
struct drm_file;
struct ivpu_bo;
struct ivpu_device;
struct ivpu_file_priv;
struct ivpu_ms_instance {
struct ivpu_bo *bo;
struct list_head ms_instance_node;
u64 mask;
u64 buff_size;
u64 active_buff_vpu_addr;
u64 inactive_buff_vpu_addr;
void *active_buff_ptr;
void *inactive_buff_ptr;
u64 leftover_bytes;
void *leftover_addr;
};
int ivpu_ms_start_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
int ivpu_ms_stop_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
int ivpu_ms_get_data_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
int ivpu_ms_get_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
void ivpu_ms_cleanup(struct ivpu_file_priv *file_priv);
void ivpu_ms_cleanup_all(struct ivpu_device *vdev);
#endif /* __IVPU_MS_H__ */
......@@ -18,6 +18,7 @@
#include "ivpu_job.h"
#include "ivpu_jsm_msg.h"
#include "ivpu_mmu.h"
#include "ivpu_ms.h"
#include "ivpu_pm.h"
static bool ivpu_disable_recovery;
......@@ -131,6 +132,7 @@ static void ivpu_pm_recovery_work(struct work_struct *work)
ivpu_suspend(vdev);
ivpu_pm_prepare_cold_boot(vdev);
ivpu_jobs_abort_all(vdev);
ivpu_ms_cleanup_all(vdev);
ret = ivpu_resume(vdev);
if (ret)
......@@ -262,6 +264,7 @@ int ivpu_pm_runtime_suspend_cb(struct device *dev)
if (!hw_is_idle) {
ivpu_err(vdev, "NPU failed to enter idle, force suspended.\n");
atomic_inc(&vdev->pm->reset_counter);
ivpu_fw_log_dump(vdev);
ivpu_pm_prepare_cold_boot(vdev);
} else {
......@@ -333,6 +336,8 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
ivpu_hw_reset(vdev);
ivpu_pm_prepare_cold_boot(vdev);
ivpu_jobs_abort_all(vdev);
ivpu_ms_cleanup_all(vdev);
ivpu_dbg(vdev, PM, "Pre-reset done.\n");
}
......
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2024 Intel Corporation
*/
#ifndef __IVPU_SYSFS_H__
#define __IVPU_SYSFS_H__
#include "ivpu_drv.h"
void ivpu_sysfs_init(struct ivpu_device *vdev);
#endif /* __IVPU_SYSFS_H__ */
This diff is collapsed.
......@@ -70,7 +70,7 @@ static void dma_fence_array_cb_func(struct dma_fence *f,
static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
{
struct dma_fence_array *array = to_dma_fence_array(fence);
struct dma_fence_array_cb *cb = (void *)(&array[1]);
struct dma_fence_array_cb *cb = array->callbacks;
unsigned i;
for (i = 0; i < array->num_fences; ++i) {
......@@ -168,22 +168,20 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
bool signal_on_any)
{
struct dma_fence_array *array;
size_t size = sizeof(*array);
WARN_ON(!num_fences || !fences);
/* Allocate the callback structures behind the array. */
size += num_fences * sizeof(struct dma_fence_array_cb);
array = kzalloc(size, GFP_KERNEL);
array = kzalloc(struct_size(array, callbacks, num_fences), GFP_KERNEL);
if (!array)
return NULL;
array->num_fences = num_fences;
spin_lock_init(&array->lock);
dma_fence_init(&array->base, &dma_fence_array_ops, &array->lock,
context, seqno);
init_irq_work(&array->work, irq_dma_fence_array_work);
array->num_fences = num_fences;
atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
array->fences = fences;
......
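The dma-fence-array hunk above drops the hand-rolled size arithmetic (sizeof(*array) plus num_fences times the callback size) in favour of struct_size(), which relies on the callback storage being a flexible array member at the end of the struct and guards the multiplication against overflow. A minimal, self-contained sketch of the same allocation pattern with hypothetical names (demo_array, demo_cb):

#include <linux/overflow.h>	/* struct_size() */
#include <linux/slab.h>		/* kzalloc() */

struct demo_cb {			/* hypothetical per-entry callback slot */
	void *data;
};

struct demo_array {			/* hypothetical container mirroring the dma_fence_array layout */
	unsigned int num;
	struct demo_cb callbacks[];	/* flexible array member, must be last */
};

static struct demo_array *demo_array_create(unsigned int num)
{
	/* struct_size() == sizeof(*a) + num * sizeof(a->callbacks[0]), with overflow checking */
	struct demo_array *a = kzalloc(struct_size(a, callbacks, num), GFP_KERNEL);

	if (a)
		a->num = num;

	return a;
}

The accessor change earlier in the same hunk, array->callbacks instead of casting &array[1], is the other half of the cleanup: once the trailing storage is a named flexible array member, no pointer arithmetic is needed to reach it.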