Kirill Smelkov / linux

Commit 68495221 authored Apr 14, 2023 by Rafael J. Wysocki
Merge back cpufreq changes for 6.4-rc1.
Parents: 4654e9f9, 11fa52fe

Showing 12 changed files, with 320 additions and 95 deletions (+320 -95).
Documentation/admin-guide/kernel-parameters.txt   +23  -17
Documentation/admin-guide/pm/amd-pstate.rst       +24  -7
drivers/acpi/cppc_acpi.c                          +111 -7
drivers/cpufreq/Kconfig.arm                       +1   -1
drivers/cpufreq/amd-pstate.c                      +129 -46
drivers/cpufreq/cpufreq.c                         +8   -3
drivers/cpufreq/freq_table.c                      +6   -1
drivers/cpufreq/intel_pstate.c                    +1   -10
drivers/cpufreq/pmac32-cpufreq.c                  +3   -3
include/acpi/cppc_acpi.h                          +11  -0
include/linux/amd-pstate.h                        +2   -0
include/linux/cpufreq.h                           +1   -0
Documentation/admin-guide/kernel-parameters.txt

@@ -339,6 +339,29 @@
 			This mode requires kvm-amd.avic=1.
 			(Default when IOMMU HW support is present.)

+	amd_pstate=	[X86]
+			disable
+			  Do not enable amd_pstate as the default
+			  scaling driver for the supported processors
+			passive
+			  Use amd_pstate with passive mode as a scaling driver.
+			  In this mode autonomous selection is disabled.
+			  Driver requests a desired performance level and platform
+			  tries to match the same performance level if it is
+			  satisfied by guaranteed performance level.
+			active
+			  Use amd_pstate_epp driver instance as the scaling driver,
+			  driver provides a hint to the hardware if software wants
+			  to bias toward performance (0x0) or energy efficiency (0xff)
+			  to the CPPC firmware. then CPPC power algorithm will
+			  calculate the runtime workload and adjust the realtime cores
+			  frequency.
+			guided
+			  Activate guided autonomous mode. Driver requests minimum and
+			  maximum performance level and the platform autonomously
+			  selects a performance level in this range and appropriate
+			  to the current workload.
+
 	amijoy.map=	[HW,JOY] Amiga joystick support
 			Map of devices attached to JOY0DAT and JOY1DAT
 			Format: <a>,<b>

@@ -7059,20 +7082,3 @@
 			  xmon commands.
 			off	xmon is disabled.

-	amd_pstate=	[X86]
-			disable
-			  Do not enable amd_pstate as the default
-			  scaling driver for the supported processors
-			passive
-			  Use amd_pstate as a scaling driver, driver requests a
-			  desired performance on this abstract scale and the power
-			  management firmware translates the requests into actual
-			  hardware states (core frequency, data fabric and memory
-			  clocks etc.)
-			active
-			  Use amd_pstate_epp driver instance as the scaling driver,
-			  driver provides a hint to the hardware if software wants
-			  to bias toward performance (0x0) or energy efficiency (0xff)
-			  to the CPPC firmware. then CPPC power algorithm will
-			  calculate the runtime workload and adjust the realtime cores
-			  frequency.
Documentation/admin-guide/pm/amd-pstate.rst

@@ -303,13 +303,18 @@ efficiency frequency management method on AMD processors.
 AMD Pstate Driver Operation Modes
 =================================

-``amd_pstate`` CPPC has two operation modes: CPPC Autonomous(active) mode and
-CPPC non-autonomous(passive) mode.
-active mode and passive mode can be chosen by different kernel parameters.
-When in Autonomous mode, CPPC ignores requests done in the Desired Performance
-Target register and takes into account only the values set to the Minimum requested
-performance, Maximum requested performance, and Energy Performance Preference
-registers. When Autonomous is disabled, it only considers the Desired Performance Target.
+``amd_pstate`` CPPC has 3 operation modes: autonomous (active) mode,
+non-autonomous (passive) mode and guided autonomous (guided) mode.
+Active/passive/guided mode can be chosen by different kernel parameters.
+
+- In autonomous mode, platform ignores the desired performance level request
+  and takes into account only the values set to the minimum, maximum and energy
+  performance preference registers.
+- In non-autonomous mode, platform gets desired performance level
+  from OS directly through Desired Performance Register.
+- In guided-autonomous mode, platform sets operating performance level
+  autonomously according to the current workload and within the limits set by
+  OS through min and max performance registers.

 Active Mode
 ------------

@@ -338,6 +343,15 @@ to the Performance Reduction Tolerance register. Above the nominal performance level,
 processor must provide at least nominal performance requested and go higher if current
 operating conditions allow.

+Guided Mode
+-----------
+
+``amd_pstate=guided``
+
+If ``amd_pstate=guided`` is passed to kernel command line option then this mode
+is activated. In this mode, driver requests minimum and maximum performance
+level and the platform autonomously selects a performance level in this range
+and appropriate to the current workload.
+
 User Space Interface in ``sysfs`` - General
 ===========================================

@@ -358,6 +372,9 @@ control its functionality at the system level. They are located in the
 "passive"
 	The driver is functional and in the ``passive mode``

+"guided"
+	The driver is functional and in the ``guided mode``
+
 "disable"
 	The driver is unregistered and not functional now.
drivers/acpi/cppc_acpi.c

@@ -1433,6 +1433,102 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
 }
 EXPORT_SYMBOL_GPL(cppc_set_epp_perf);

+/**
+ * cppc_get_auto_sel_caps - Read autonomous selection register.
+ * @cpunum : CPU from which to read register.
+ * @perf_caps : struct where autonomous selection register value is updated.
+ */
+int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
+{
+	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
+	struct cpc_register_resource *auto_sel_reg;
+	u64 auto_sel;
+
+	if (!cpc_desc) {
+		pr_debug("No CPC descriptor for CPU:%d\n", cpunum);
+		return -ENODEV;
+	}
+
+	auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
+
+	if (!CPC_SUPPORTED(auto_sel_reg))
+		pr_warn_once("Autonomous mode is not supported!\n");
+
+	if (CPC_IN_PCC(auto_sel_reg)) {
+		int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
+		struct cppc_pcc_data *pcc_ss_data = NULL;
+		int ret = 0;
+
+		if (pcc_ss_id < 0)
+			return -ENODEV;
+
+		pcc_ss_data = pcc_data[pcc_ss_id];
+
+		down_write(&pcc_ss_data->pcc_lock);
+
+		if (send_pcc_cmd(pcc_ss_id, CMD_READ) >= 0) {
+			cpc_read(cpunum, auto_sel_reg, &auto_sel);
+			perf_caps->auto_sel = (bool)auto_sel;
+		} else {
+			ret = -EIO;
+		}
+
+		up_write(&pcc_ss_data->pcc_lock);
+
+		return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cppc_get_auto_sel_caps);
+
+/**
+ * cppc_set_auto_sel - Write autonomous selection register.
+ * @cpu : CPU to which to write register.
+ * @enable : the desired value of autonomous selection register to be updated.
+ */
+int cppc_set_auto_sel(int cpu, bool enable)
+{
+	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
+	struct cpc_register_resource *auto_sel_reg;
+	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
+	struct cppc_pcc_data *pcc_ss_data = NULL;
+	int ret = -EINVAL;
+
+	if (!cpc_desc) {
+		pr_debug("No CPC descriptor for CPU:%d\n", cpu);
+		return -ENODEV;
+	}
+
+	auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
+
+	if (CPC_IN_PCC(auto_sel_reg)) {
+		if (pcc_ss_id < 0) {
+			pr_debug("Invalid pcc_ss_id\n");
+			return -ENODEV;
+		}
+
+		if (CPC_SUPPORTED(auto_sel_reg)) {
+			ret = cpc_write(cpu, auto_sel_reg, enable);
+			if (ret)
+				return ret;
+		}
+
+		pcc_ss_data = pcc_data[pcc_ss_id];
+
+		down_write(&pcc_ss_data->pcc_lock);
+		/* after writing CPC, transfer the ownership of PCC to platform */
+		ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
+		up_write(&pcc_ss_data->pcc_lock);
+	} else {
+		ret = -ENOTSUPP;
+		pr_debug("_CPC in PCC is not supported\n");
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(cppc_set_auto_sel);
+
 /**
  * cppc_set_enable - Set to enable CPPC on the processor by writing the
  * Continuous Performance Control package EnableRegister field.

@@ -1488,7 +1584,7 @@ EXPORT_SYMBOL_GPL(cppc_set_enable);
 int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
 {
 	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
-	struct cpc_register_resource *desired_reg;
+	struct cpc_register_resource *desired_reg, *min_perf_reg, *max_perf_reg;
 	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
 	struct cppc_pcc_data *pcc_ss_data = NULL;
 	int ret = 0;

@@ -1499,6 +1595,8 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
 	}

 	desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
+	min_perf_reg = &cpc_desc->cpc_regs[MIN_PERF];
+	max_perf_reg = &cpc_desc->cpc_regs[MAX_PERF];

 	/*
 	 * This is Phase-I where we want to write to CPC registers

@@ -1507,7 +1605,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
 	 * Since read_lock can be acquired by multiple CPUs simultaneously we
 	 * achieve that goal here
 	 */
-	if (CPC_IN_PCC(desired_reg)) {
+	if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
 		if (pcc_ss_id < 0) {
 			pr_debug("Invalid pcc_ss_id\n");
 			return -ENODEV;

@@ -1530,13 +1628,19 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
 		cpc_desc->write_cmd_status = 0;
 	}

-	cpc_write(cpu, desired_reg, perf_ctrls->desired_perf);
-
 	/*
-	 * Skip writing MIN/MAX until Linux knows how to come up with
-	 * useful values.
+	 * Only write if min_perf and max_perf not zero. Some drivers pass zero
+	 * value to min and max perf, but they don't mean to set the zero value,
+	 * they just don't want to write to those registers.
 	 */
+	cpc_write(cpu, desired_reg, perf_ctrls->desired_perf);
+
+	if (perf_ctrls->min_perf)
+		cpc_write(cpu, min_perf_reg, perf_ctrls->min_perf);
+	if (perf_ctrls->max_perf)
+		cpc_write(cpu, max_perf_reg, perf_ctrls->max_perf);

-	if (CPC_IN_PCC(desired_reg))
+	if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg))
 		up_read(&pcc_ss_data->pcc_lock);	/* END Phase-I */

@@ -1584,7 +1688,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
 	 * case during a CMD_READ and if there are pending writes it delivers
 	 * the write command before servicing the read command
 	 */
-	if (CPC_IN_PCC(desired_reg)) {
+	if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
 		if (down_write_trylock(&pcc_ss_data->pcc_lock)) {	/* BEGIN Phase-II */
 			/* Update only if there are pending write commands */
 			if (pcc_ss_data->pending_pcc_write_cmd)
drivers/cpufreq/Kconfig.arm

@@ -95,7 +95,7 @@ config ARM_BRCMSTB_AVS_CPUFREQ
 	help
 	  Some Broadcom STB SoCs use a co-processor running proprietary firmware
 	  ("AVS") to handle voltage and frequency scaling. This driver provides
-	  a standard CPUfreq interface to to the firmware.
+	  a standard CPUfreq interface to the firmware.

 	  Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS.
drivers/cpufreq/amd-pstate.c

@@ -106,6 +106,8 @@ static unsigned int epp_values[] = {
 	[EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
 };

+typedef int (*cppc_mode_transition_fn)(int);
+
 static inline int get_mode_idx_from_str(const char *str, size_t size)
 {
 	int i;

@@ -308,7 +310,22 @@ static int cppc_init_perf(struct amd_cpudata *cpudata)
 		   cppc_perf.lowest_nonlinear_perf);
 	WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf);

-	return 0;
+	if (cppc_state == AMD_PSTATE_ACTIVE)
+		return 0;
+
+	ret = cppc_get_auto_sel_caps(cpudata->cpu, &cppc_perf);
+	if (ret) {
+		pr_warn("failed to get auto_sel, ret: %d\n", ret);
+		return 0;
+	}
+
+	ret = cppc_set_auto_sel(cpudata->cpu,
+			(cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
+
+	if (ret)
+		pr_warn("failed to set auto_sel, ret: %d\n", ret);
+
+	return ret;
 }

 DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf);

@@ -385,12 +402,18 @@ static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
 }

 static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
-			      u32 des_perf, u32 max_perf, bool fast_switch)
+			      u32 des_perf, u32 max_perf, bool fast_switch,
+			      int gov_flags)
 {
 	u64 prev = READ_ONCE(cpudata->cppc_req_cached);
 	u64 value = prev;

 	des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
+
+	if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
+		min_perf = des_perf;
+		des_perf = 0;
+	}
+
 	value &= ~AMD_CPPC_MIN_PERF(~0L);
 	value |= AMD_CPPC_MIN_PERF(min_perf);
@@ -445,7 +468,7 @@ static int amd_pstate_target(struct cpufreq_policy *policy,
 	cpufreq_freq_transition_begin(policy, &freqs);
 	amd_pstate_update(cpudata, min_perf, des_perf,
-			  max_perf, false);
+			  max_perf, false, policy->governor->flags);
 	cpufreq_freq_transition_end(policy, &freqs, false);

 	return 0;

@@ -479,7 +502,8 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
 	if (max_perf < min_perf)
 		max_perf = min_perf;

-	amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true);
+	amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true,
+			  policy->governor->flags);
 	cpufreq_cpu_put(policy);
 }
@@ -816,6 +840,98 @@ static ssize_t show_energy_performance_preference(
 	return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
 }

+static void amd_pstate_driver_cleanup(void)
+{
+	amd_pstate_enable(false);
+	cppc_state = AMD_PSTATE_DISABLE;
+	current_pstate_driver = NULL;
+}
+
+static int amd_pstate_register_driver(int mode)
+{
+	int ret;
+
+	if (mode == AMD_PSTATE_PASSIVE || mode == AMD_PSTATE_GUIDED)
+		current_pstate_driver = &amd_pstate_driver;
+	else if (mode == AMD_PSTATE_ACTIVE)
+		current_pstate_driver = &amd_pstate_epp_driver;
+	else
+		return -EINVAL;
+
+	cppc_state = mode;
+	ret = cpufreq_register_driver(current_pstate_driver);
+	if (ret) {
+		amd_pstate_driver_cleanup();
+		return ret;
+	}
+	return 0;
+}
+
+static int amd_pstate_unregister_driver(int dummy)
+{
+	cpufreq_unregister_driver(current_pstate_driver);
+	amd_pstate_driver_cleanup();
+	return 0;
+}
+
+static int amd_pstate_change_mode_without_dvr_change(int mode)
+{
+	int cpu = 0;
+
+	cppc_state = mode;
+
+	if (boot_cpu_has(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
+		return 0;
+
+	for_each_present_cpu(cpu) {
+		cppc_set_auto_sel(cpu, (cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
+	}
+	return 0;
+}
+
+static int amd_pstate_change_driver_mode(int mode)
+{
+	int ret;
+
+	ret = amd_pstate_unregister_driver(0);
+	if (ret)
+		return ret;
+
+	ret = amd_pstate_register_driver(mode);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static cppc_mode_transition_fn mode_state_machine[AMD_PSTATE_MAX][AMD_PSTATE_MAX] = {
+	[AMD_PSTATE_DISABLE] = {
+		[AMD_PSTATE_DISABLE] = NULL,
+		[AMD_PSTATE_PASSIVE] = amd_pstate_register_driver,
+		[AMD_PSTATE_ACTIVE]  = amd_pstate_register_driver,
+		[AMD_PSTATE_GUIDED]  = amd_pstate_register_driver,
+	},
+	[AMD_PSTATE_PASSIVE] = {
+		[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
+		[AMD_PSTATE_PASSIVE] = NULL,
+		[AMD_PSTATE_ACTIVE]  = amd_pstate_change_driver_mode,
+		[AMD_PSTATE_GUIDED]  = amd_pstate_change_mode_without_dvr_change,
+	},
+	[AMD_PSTATE_ACTIVE] = {
+		[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
+		[AMD_PSTATE_PASSIVE] = amd_pstate_change_driver_mode,
+		[AMD_PSTATE_ACTIVE]  = NULL,
+		[AMD_PSTATE_GUIDED]  = amd_pstate_change_driver_mode,
+	},
+	[AMD_PSTATE_GUIDED] = {
+		[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
+		[AMD_PSTATE_PASSIVE] = amd_pstate_change_mode_without_dvr_change,
+		[AMD_PSTATE_ACTIVE]  = amd_pstate_change_driver_mode,
+		[AMD_PSTATE_GUIDED]  = NULL,
+	},
+};
+
 static ssize_t amd_pstate_show_status(char *buf)
 {
 	if (!current_pstate_driver)
@@ -824,55 +940,22 @@ static ssize_t amd_pstate_show_status(char *buf)
 	return sysfs_emit(buf, "%s\n", amd_pstate_mode_string[cppc_state]);
 }

-static void amd_pstate_driver_cleanup(void)
-{
-	current_pstate_driver = NULL;
-}
-
 static int amd_pstate_update_status(const char *buf, size_t size)
 {
-	int ret = 0;
 	int mode_idx;

-	if (size > 7 || size < 6)
+	if (size > strlen("passive") || size < strlen("active"))
 		return -EINVAL;
+
 	mode_idx = get_mode_idx_from_str(buf, size);

-	switch(mode_idx) {
-	case AMD_PSTATE_DISABLE:
-		if (current_pstate_driver) {
-			cpufreq_unregister_driver(current_pstate_driver);
-			amd_pstate_driver_cleanup();
-		}
-		break;
-	case AMD_PSTATE_PASSIVE:
-		if (current_pstate_driver) {
-			if (current_pstate_driver == &amd_pstate_driver)
-				return 0;
-			cpufreq_unregister_driver(current_pstate_driver);
-		}
-		current_pstate_driver = &amd_pstate_driver;
-		cppc_state = AMD_PSTATE_PASSIVE;
-		ret = cpufreq_register_driver(current_pstate_driver);
-		break;
-	case AMD_PSTATE_ACTIVE:
-		if (current_pstate_driver) {
-			if (current_pstate_driver == &amd_pstate_epp_driver)
-				return 0;
-			cpufreq_unregister_driver(current_pstate_driver);
-		}
-		current_pstate_driver = &amd_pstate_epp_driver;
-		cppc_state = AMD_PSTATE_ACTIVE;
-		ret = cpufreq_register_driver(current_pstate_driver);
-		break;
-	default:
-		ret = -EINVAL;
-		break;
-	}
+	if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX)
+		return -EINVAL;

-	return ret;
+	if (mode_state_machine[cppc_state][mode_idx])
+		return mode_state_machine[cppc_state][mode_idx](mode_idx);
+
+	return 0;
 }

 static ssize_t show_status(struct kobject *kobj,
@@ -1277,7 +1360,7 @@ static int __init amd_pstate_init(void)
 	/* capability check */
 	if (boot_cpu_has(X86_FEATURE_CPPC)) {
 		pr_debug("AMD CPPC MSR based functionality is supported\n");
-		if (cppc_state == AMD_PSTATE_PASSIVE)
+		if (cppc_state != AMD_PSTATE_ACTIVE)
 			current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
 	} else {
 		pr_debug("AMD CPPC shared memory based functionality is supported\n");

@@ -1339,7 +1422,7 @@ static int __init amd_pstate_param(char *str)
 	if (cppc_state == AMD_PSTATE_ACTIVE)
 		current_pstate_driver = &amd_pstate_epp_driver;

-	if (cppc_state == AMD_PSTATE_PASSIVE)
+	if (cppc_state == AMD_PSTATE_PASSIVE || cppc_state == AMD_PSTATE_GUIDED)
 		current_pstate_driver = &amd_pstate_driver;

 	return 0;
drivers/cpufreq/cpufreq.c

@@ -73,6 +73,11 @@ static inline bool has_target(void)
 	return cpufreq_driver->target_index || cpufreq_driver->target;
 }

+bool has_target_index(void)
+{
+	return !!cpufreq_driver->target_index;
+}
+
 /* internal prototypes */
 static unsigned int __cpufreq_get(struct cpufreq_policy *policy);
 static int cpufreq_init_governor(struct cpufreq_policy *policy);

@@ -725,9 +730,9 @@ static ssize_t store_##file_name \
 	unsigned long val;						\
 	int ret;							\
 									\
-	ret = sscanf(buf, "%lu", &val);					\
-	if (ret != 1)							\
-		return -EINVAL;						\
+	ret = kstrtoul(buf, 0, &val);					\
+	if (ret)							\
+		return ret;						\
 									\
 	ret = freq_qos_update_request(policy->object##_freq_req, val);	\
 	return ret >= 0 ? count : ret;					\
drivers/cpufreq/freq_table.c

@@ -355,8 +355,13 @@ int cpufreq_table_validate_and_sort(struct cpufreq_policy *policy)
 {
 	int ret;

-	if (!policy->freq_table)
+	if (!policy->freq_table) {
+		/* Freq table must be passed by drivers with target_index() */
+		if (has_target_index())
+			return -EINVAL;
 		return 0;
+	}

 	ret = cpufreq_frequency_table_cpuinfo(policy, policy->freq_table);
 	if (ret)
drivers/cpufreq/intel_pstate.c

@@ -2384,12 +2384,6 @@ static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
 	{}
 };

-static const struct x86_cpu_id intel_pstate_hwp_boost_ids[] = {
-	X86_MATCH(SKYLAKE_X, core_funcs),
-	X86_MATCH(SKYLAKE, core_funcs),
-	{}
-};
-
 static int intel_pstate_init_cpu(unsigned int cpunum)
 {
 	struct cpudata *cpu;

@@ -2408,12 +2402,9 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
 		cpu->epp_default = -EINVAL;

 		if (hwp_active) {
-			const struct x86_cpu_id *id;
-
 			intel_pstate_hwp_enable(cpu);

-			id = x86_match_cpu(intel_pstate_hwp_boost_ids);
-			if (id && intel_pstate_acpi_pm_profile_server())
+			if (intel_pstate_acpi_pm_profile_server())
 				hwp_boost = true;
 		}
 	} else if (hwp_active) {
drivers/cpufreq/pmac32-cpufreq.c

@@ -546,7 +546,7 @@ static int pmac_cpufreq_init_7447A(struct device_node *cpunode)
 {
 	struct device_node *volt_gpio_np;

-	if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
+	if (!of_property_read_bool(cpunode, "dynamic-power-step"))
 		return 1;

 	volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select");

@@ -576,7 +576,7 @@ static int pmac_cpufreq_init_750FX(struct device_node *cpunode)
 	u32 pvr;
 	const u32 *value;

-	if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
+	if (!of_property_read_bool(cpunode, "dynamic-power-step"))
 		return 1;

 	hi_freq = cur_freq;

@@ -632,7 +632,7 @@ static int __init pmac_cpufreq_setup(void)
 	/* Check for 7447A based MacRISC3 */
 	if (of_machine_is_compatible("MacRISC3") &&
-	    of_get_property(cpunode, "dynamic-power-step", NULL) &&
+	    of_property_read_bool(cpunode, "dynamic-power-step") &&
 	    PVR_VER(mfspr(SPRN_PVR)) == 0x8003) {
 		pmac_cpufreq_init_7447A(cpunode);
include/acpi/cppc_acpi.h

@@ -109,6 +109,7 @@ struct cppc_perf_caps {
 	u32 lowest_freq;
 	u32 nominal_freq;
 	u32 energy_perf;
+	bool auto_sel;
 };

 struct cppc_perf_ctrls {

@@ -153,6 +154,8 @@ extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
 extern int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val);
 extern int cppc_get_epp_perf(int cpunum, u64 *epp_perf);
 extern int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable);
+extern int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps);
+extern int cppc_set_auto_sel(int cpu, bool enable);
 #else /* !CONFIG_ACPI_CPPC_LIB */
 static inline int cppc_get_desired_perf(int cpunum, u64 *desired_perf)
 {

@@ -214,6 +217,14 @@ static inline int cppc_get_epp_perf(int cpunum, u64 *epp_perf)
 {
 	return -ENOTSUPP;
 }
+static inline int cppc_set_auto_sel(int cpu, bool enable)
+{
+	return -ENOTSUPP;
+}
+static inline int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
+{
+	return -ENOTSUPP;
+}
 #endif /* !CONFIG_ACPI_CPPC_LIB */
 #endif /* _CPPC_ACPI_H */
include/linux/amd-pstate.h

@@ -97,6 +97,7 @@ enum amd_pstate_mode {
 	AMD_PSTATE_DISABLE = 0,
 	AMD_PSTATE_PASSIVE,
 	AMD_PSTATE_ACTIVE,
+	AMD_PSTATE_GUIDED,
 	AMD_PSTATE_MAX,
 };

@@ -104,6 +105,7 @@ static const char * const amd_pstate_mode_string[] = {
 	[AMD_PSTATE_DISABLE] = "disable",
 	[AMD_PSTATE_PASSIVE] = "passive",
 	[AMD_PSTATE_ACTIVE] = "active",
+	[AMD_PSTATE_GUIDED] = "guided",
 	NULL,
 };
 #endif /* _LINUX_AMD_PSTATE_H */
include/linux/cpufreq.h

@@ -237,6 +237,7 @@ bool cpufreq_supports_freq_invariance(void);
 struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy);
 void cpufreq_enable_fast_switch(struct cpufreq_policy *policy);
 void cpufreq_disable_fast_switch(struct cpufreq_policy *policy);
+bool has_target_index(void);
 #else
 static inline unsigned int cpufreq_get(unsigned int cpu)
 {