Commit 5ddbecb4 authored by Rafael J. Wysocki

Merge branch 'cpufreq/arm/linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm

Pull ARM cpufreq updates for v5.14-rc1 from Viresh Kumar:

"- Add frequency invariance support for CPPC driver again and related
   fixes/changes."

 - Minor changes/cleanups for Meditak driver (Fabien Parent and Seiya
   Wang), Qcom platform (Sibi Sankar), and SCMI driver (Christophe
   JAILLET).

 - New bindings for generic performance domains (Sudeep Holla).

 - Rename black/white-lists (Viresh Kumar)."

* 'cpufreq/arm/linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm:
  cpufreq: CPPC: Add support for frequency invariance
  arch_topology: Avoid use-after-free for scale_freq_data
  cpufreq: CPPC: Pass structure instance by reference
  cpufreq: CPPC: Fix potential memleak in cppc_cpufreq_cpu_init
  dt-bindings: cpufreq: update cpu type and clock name for MT8173 SoC
  clk: mediatek: remove deprecated CLK_INFRA_CA57SEL for MT8173 SoC
  cpufreq: dt: Rename black/white-lists
  cpufreq: scmi: Fix an error message
  cpufreq: mediatek: add support for mt8365
  dt-bindings: dvfs: Add support for generic performance domains
  cpufreq: blacklist SC7280 in cpufreq-dt-platdev
parents b3beca76 c503c193
@@ -257,6 +257,13 @@ properties:

       where voltage is in V, frequency is in MHz.

+  performance-domains:
+    maxItems: 1
+    description:
+      List of phandles and performance domain specifiers, as defined by
+      bindings of the performance domain provider. See also
+      dvfs/performance-domain.yaml.
+
   power-domains:
     description:
       List of phandles and PM domain specifiers, as defined by bindings of the
......
@@ -202,11 +202,11 @@ Example 2 (MT8173 SoC):

         cpu2: cpu@100 {
                 device_type = "cpu";
-                compatible = "arm,cortex-a57";
+                compatible = "arm,cortex-a72";
                 reg = <0x100>;
                 enable-method = "psci";
                 cpu-idle-states = <&CPU_SLEEP_0>;
-                clocks = <&infracfg CLK_INFRA_CA57SEL>,
+                clocks = <&infracfg CLK_INFRA_CA72SEL>,
                          <&apmixedsys CLK_APMIXED_MAINPLL>;
                 clock-names = "cpu", "intermediate";
                 operating-points-v2 = <&cpu_opp_table_b>;
@@ -214,11 +214,11 @@ Example 2 (MT8173 SoC):

         cpu3: cpu@101 {
                 device_type = "cpu";
-                compatible = "arm,cortex-a57";
+                compatible = "arm,cortex-a72";
                 reg = <0x101>;
                 enable-method = "psci";
                 cpu-idle-states = <&CPU_SLEEP_0>;
-                clocks = <&infracfg CLK_INFRA_CA57SEL>,
+                clocks = <&infracfg CLK_INFRA_CA72SEL>,
                          <&apmixedsys CLK_APMIXED_MAINPLL>;
                 clock-names = "cpu", "intermediate";
                 operating-points-v2 = <&cpu_opp_table_b>;
......
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/dvfs/performance-domain.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Generic performance domains

maintainers:
  - Sudeep Holla <sudeep.holla@arm.com>

description: |+
  This binding is intended for performance management of groups of devices or
  CPUs that run in the same performance domain. Performance domains must not
  be confused with power domains. A performance domain is defined by a set
  of devices that always have to run at the same performance level. For a given
  performance domain, there is a single point of control that affects all the
  devices in the domain, making it impossible to set the performance level of
  an individual device in the domain independently from other devices in
  that domain. For example, a set of CPUs that share a voltage domain, and
  have a common frequency control, is said to be in the same performance
  domain.

  This device tree binding can be used to bind performance domain consumer
  devices with their performance domains provided by performance domain
  providers. A performance domain provider can be represented by any node in
  the device tree and can provide one or more performance domains. A consumer
  node can refer to the provider by a phandle and a set of phandle arguments
  (so called performance domain specifiers) of length specified by the
  #performance-domain-cells property in the performance domain provider node.

select: true

properties:
  "#performance-domain-cells":
    description:
      Number of cells in a performance domain specifier. Typically 0 for nodes
      representing a single performance domain and 1 for nodes providing
      multiple performance domains (e.g. performance controllers), but can be
      any value as specified by device tree binding documentation of particular
      provider.
    enum: [ 0, 1 ]

  performance-domains:
    $ref: '/schemas/types.yaml#/definitions/phandle-array'
    maxItems: 1
    description:
      A phandle and performance domain specifier as defined by bindings of the
      performance controller/provider specified by phandle.

additionalProperties: true

examples:
  - |
    performance: performance-controller@12340000 {
        compatible = "qcom,cpufreq-hw";
        reg = <0x12340000 0x1000>;
        #performance-domain-cells = <1>;
    };

    // The node above defines a performance controller that is a performance
    // domain provider and expects one cell as its phandle argument.

    cpus {
        #address-cells = <2>;
        #size-cells = <0>;

        cpu@0 {
            device_type = "cpu";
            compatible = "arm,cortex-a57";
            reg = <0x0 0x0>;
            performance-domains = <&performance 1>;
        };
    };
@@ -18,10 +18,11 @@
 #include <linux/cpumask.h>
 #include <linux/init.h>
 #include <linux/percpu.h>
+#include <linux/rcupdate.h>
 #include <linux/sched.h>
 #include <linux/smp.h>

-static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
+static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data);
 static struct cpumask scale_freq_counters_mask;
 static bool scale_freq_invariant;

@@ -66,16 +67,20 @@ void topology_set_scale_freq_source(struct scale_freq_data *data,
         if (cpumask_empty(&scale_freq_counters_mask))
                 scale_freq_invariant = topology_scale_freq_invariant();

+        rcu_read_lock();
+
         for_each_cpu(cpu, cpus) {
-                sfd = per_cpu(sft_data, cpu);
+                sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));

                 /* Use ARCH provided counters whenever possible */
                 if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) {
-                        per_cpu(sft_data, cpu) = data;
+                        rcu_assign_pointer(per_cpu(sft_data, cpu), data);
                         cpumask_set_cpu(cpu, &scale_freq_counters_mask);
                 }
         }

+        rcu_read_unlock();
+
         update_scale_freq_invariant(true);
 }
 EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);

@@ -86,22 +91,32 @@ void topology_clear_scale_freq_source(enum scale_freq_source source,
         struct scale_freq_data *sfd;
         int cpu;

+        rcu_read_lock();
+
         for_each_cpu(cpu, cpus) {
-                sfd = per_cpu(sft_data, cpu);
+                sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));

                 if (sfd && sfd->source == source) {
-                        per_cpu(sft_data, cpu) = NULL;
+                        rcu_assign_pointer(per_cpu(sft_data, cpu), NULL);
                         cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
                 }
         }

+        rcu_read_unlock();
+
+        /*
+         * Make sure all references to previous sft_data are dropped to avoid
+         * use-after-free races.
+         */
+        synchronize_rcu();
+
         update_scale_freq_invariant(false);
 }
 EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source);

 void topology_scale_freq_tick(void)
 {
-        struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data);
+        struct scale_freq_data *sfd = rcu_dereference_sched(*this_cpu_ptr(&sft_data));

         if (sfd)
                 sfd->set_freq_scale();
......
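The use-after-free closed by the arch_topology.c change above is easy to restate: topology_scale_freq_tick() runs from the scheduler tick and can still be dereferencing a scale_freq_data pointer while another CPU is unregistering it. Marking sft_data as __rcu and ending the clear path with synchronize_rcu() gives unregistering drivers a clean lifetime guarantee. A minimal sketch of a counter provider built on this API (the my_* names are hypothetical, and the SCALE_FREQ_SOURCE_CPPC tag added later in this series is borrowed for illustration):

#include <linux/arch_topology.h>
#include <linux/cpumask.h>

/* Hypothetical provider, mirroring what the CPPC driver does below. */
static void my_set_freq_scale(void)
{
        /* Called from the scheduler tick on every CPU in the mask. */
}

static struct scale_freq_data my_sfd = {
        .source         = SCALE_FREQ_SOURCE_CPPC,
        .set_freq_scale = my_set_freq_scale,
};

static void my_register(const struct cpumask *cpus)
{
        topology_set_scale_freq_source(&my_sfd, cpus);
}

static void my_unregister(const struct cpumask *cpus)
{
        /*
         * topology_clear_scale_freq_source() now ends with
         * synchronize_rcu(), so once it returns no CPU can still be
         * running my_set_freq_scale(), and my_sfd can safely go away.
         */
        topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC, cpus);
}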
@@ -19,6 +19,16 @@ config ACPI_CPPC_CPUFREQ

           If in doubt, say N.

+config ACPI_CPPC_CPUFREQ_FIE
+        bool "Frequency Invariance support for CPPC cpufreq driver"
+        depends on ACPI_CPPC_CPUFREQ && GENERIC_ARCH_TOPOLOGY
+        default y
+        help
+          This extends frequency invariance support in the CPPC cpufreq
+          driver, by using CPPC delivered and reference performance counters.
+
+          If in doubt, say N.
+
 config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
         tristate "Allwinner nvmem based SUN50I CPUFreq driver"
         depends on ARCH_SUNXI
......
@@ -10,14 +10,18 @@

 #define pr_fmt(fmt)        "CPPC Cpufreq:" fmt

+#include <linux/arch_topology.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/delay.h>
 #include <linux/cpu.h>
 #include <linux/cpufreq.h>
 #include <linux/dmi.h>
+#include <linux/irq_work.h>
+#include <linux/kthread.h>
 #include <linux/time.h>
 #include <linux/vmalloc.h>
+#include <uapi/linux/sched/types.h>

 #include <asm/unaligned.h>
@@ -57,6 +61,216 @@ static struct cppc_workaround_oem_info wa_info[] = {
         }
 };

+#ifdef CONFIG_ACPI_CPPC_CPUFREQ_FIE
+
+/* Frequency invariance support */
+struct cppc_freq_invariance {
+        int cpu;
+        struct irq_work irq_work;
+        struct kthread_work work;
+        struct cppc_perf_fb_ctrs prev_perf_fb_ctrs;
+        struct cppc_cpudata *cpu_data;
+};
+
+static DEFINE_PER_CPU(struct cppc_freq_invariance, cppc_freq_inv);
+static struct kthread_worker *kworker_fie;
+
+static struct cpufreq_driver cppc_cpufreq_driver;
+static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu);
+static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+                                 struct cppc_perf_fb_ctrs *fb_ctrs_t0,
+                                 struct cppc_perf_fb_ctrs *fb_ctrs_t1);
+
+/**
+ * cppc_scale_freq_workfn - CPPC arch_freq_scale updater for frequency invariance
+ * @work: The work item.
+ *
+ * The CPPC driver registers itself with the topology core to provide its own
+ * implementation (cppc_scale_freq_tick()) of topology_scale_freq_tick() which
+ * gets called by the scheduler on every tick.
+ *
+ * Note that the arch specific counters have higher priority than CPPC counters,
+ * if available, though the CPPC driver doesn't need to have any special
+ * handling for that.
+ *
+ * On an invocation of cppc_scale_freq_tick(), we schedule an irq work (since we
+ * reach here from hard-irq context), which then schedules a normal work item
+ * and cppc_scale_freq_workfn() updates the per_cpu arch_freq_scale variable
+ * based on the counter updates since the last tick.
+ */
+static void cppc_scale_freq_workfn(struct kthread_work *work)
+{
+        struct cppc_freq_invariance *cppc_fi;
+        struct cppc_perf_fb_ctrs fb_ctrs = {0};
+        struct cppc_cpudata *cpu_data;
+        unsigned long local_freq_scale;
+        u64 perf;
+
+        cppc_fi = container_of(work, struct cppc_freq_invariance, work);
+        cpu_data = cppc_fi->cpu_data;
+
+        if (cppc_get_perf_ctrs(cppc_fi->cpu, &fb_ctrs)) {
+                pr_warn("%s: failed to read perf counters\n", __func__);
+                return;
+        }
+
+        perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs,
+                                     &fb_ctrs);
+        cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
+
+        perf <<= SCHED_CAPACITY_SHIFT;
+        local_freq_scale = div64_u64(perf, cpu_data->perf_caps.highest_perf);
+
+        /* This can happen due to counter overflow */
+        if (unlikely(local_freq_scale > 1024))
+                local_freq_scale = 1024;
+
+        per_cpu(arch_freq_scale, cppc_fi->cpu) = local_freq_scale;
+}
+
+static void cppc_irq_work(struct irq_work *irq_work)
+{
+        struct cppc_freq_invariance *cppc_fi;
+
+        cppc_fi = container_of(irq_work, struct cppc_freq_invariance, irq_work);
+        kthread_queue_work(kworker_fie, &cppc_fi->work);
+}
+
+static void cppc_scale_freq_tick(void)
+{
+        struct cppc_freq_invariance *cppc_fi = &per_cpu(cppc_freq_inv, smp_processor_id());
+
+        /*
+         * cppc_get_perf_ctrs() can potentially sleep, call that from the right
+         * context.
+         */
+        irq_work_queue(&cppc_fi->irq_work);
+}
+
+static struct scale_freq_data cppc_sftd = {
+        .source = SCALE_FREQ_SOURCE_CPPC,
+        .set_freq_scale = cppc_scale_freq_tick,
+};
+
+static void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy)
+{
+        struct cppc_freq_invariance *cppc_fi;
+        int cpu, ret;
+
+        if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+                return;
+
+        for_each_cpu(cpu, policy->cpus) {
+                cppc_fi = &per_cpu(cppc_freq_inv, cpu);
+                cppc_fi->cpu = cpu;
+                cppc_fi->cpu_data = policy->driver_data;
+                kthread_init_work(&cppc_fi->work, cppc_scale_freq_workfn);
+                init_irq_work(&cppc_fi->irq_work, cppc_irq_work);
+
+                ret = cppc_get_perf_ctrs(cpu, &cppc_fi->prev_perf_fb_ctrs);
+                if (ret) {
+                        pr_warn("%s: failed to read perf counters for cpu:%d: %d\n",
+                                __func__, cpu, ret);
+
+                        /*
+                         * Don't abort if the CPU was offline while the driver
+                         * was getting registered.
+                         */
+                        if (cpu_online(cpu))
+                                return;
+                }
+        }
+
+        /* Register for freq-invariance */
+        topology_set_scale_freq_source(&cppc_sftd, policy->cpus);
+}
+
+/*
+ * We free all the resources on policy's removal and not on CPU removal as the
+ * irq-works are per-cpu and the hotplug core takes care of flushing the pending
+ * irq-works (hint: smpcfd_dying_cpu()) on CPU hotplug. Even if the kthread-work
+ * fires on another CPU after the concerned CPU is removed, it won't harm.
+ *
+ * We just need to make sure to remove them all on policy->exit().
+ */
+static void cppc_cpufreq_cpu_fie_exit(struct cpufreq_policy *policy)
+{
+        struct cppc_freq_invariance *cppc_fi;
+        int cpu;
+
+        if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+                return;
+
+        /* policy->cpus will be empty here, use related_cpus instead */
+        topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC, policy->related_cpus);
+
+        for_each_cpu(cpu, policy->related_cpus) {
+                cppc_fi = &per_cpu(cppc_freq_inv, cpu);
+                irq_work_sync(&cppc_fi->irq_work);
+                kthread_cancel_work_sync(&cppc_fi->work);
+        }
+}
+
+static void __init cppc_freq_invariance_init(void)
+{
+        struct sched_attr attr = {
+                .size           = sizeof(struct sched_attr),
+                .sched_policy   = SCHED_DEADLINE,
+                .sched_nice     = 0,
+                .sched_priority = 0,
+                /*
+                 * Fake (unused) bandwidth; workaround to "fix"
+                 * priority inheritance.
+                 */
+                .sched_runtime  = 1000000,
+                .sched_deadline = 10000000,
+                .sched_period   = 10000000,
+        };
+        int ret;
+
+        if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+                return;
+
+        kworker_fie = kthread_create_worker(0, "cppc_fie");
+        if (IS_ERR(kworker_fie))
+                return;
+
+        ret = sched_setattr_nocheck(kworker_fie->task, &attr);
+        if (ret) {
+                pr_warn("%s: failed to set SCHED_DEADLINE: %d\n", __func__,
+                        ret);
+                kthread_destroy_worker(kworker_fie);
+                return;
+        }
+}
+
+static void cppc_freq_invariance_exit(void)
+{
+        if (cppc_cpufreq_driver.get == hisi_cppc_cpufreq_get_rate)
+                return;
+
+        kthread_destroy_worker(kworker_fie);
+        kworker_fie = NULL;
+}
+
+#else
+static inline void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy)
+{
+}
+
+static inline void cppc_cpufreq_cpu_fie_exit(struct cpufreq_policy *policy)
+{
+}
+
+static inline void cppc_freq_invariance_init(void)
+{
+}
+
+static inline void cppc_freq_invariance_exit(void)
+{
+}
+#endif /* CONFIG_ACPI_CPPC_CPUFREQ_FIE */
+
 /* Callback function used to retrieve the max frequency from DMI */
 static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private)
 {
@@ -256,6 +470,16 @@ static struct cppc_cpudata *cppc_cpufreq_get_cpu_data(unsigned int cpu)
         return NULL;
 }

+static void cppc_cpufreq_put_cpu_data(struct cpufreq_policy *policy)
+{
+        struct cppc_cpudata *cpu_data = policy->driver_data;
+
+        list_del(&cpu_data->node);
+        free_cpumask_var(cpu_data->shared_cpu_map);
+        kfree(cpu_data);
+        policy->driver_data = NULL;
+}
+
 static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
         unsigned int cpu = policy->cpu;
@@ -309,7 +533,8 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
         default:
                 pr_debug("Unsupported CPU co-ord type: %d\n",
                          policy->shared_type);
-                return -EFAULT;
+                ret = -EFAULT;
+                goto out;
         }

         /*
@@ -324,10 +549,17 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
         cpu_data->perf_ctrls.desired_perf = caps->highest_perf;

         ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
-        if (ret)
+        if (ret) {
                 pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
                          caps->highest_perf, cpu, ret);
+                goto out;
+        }
+
+        cppc_cpufreq_cpu_fie_init(policy);
+        return 0;
+
+out:
+        cppc_cpufreq_put_cpu_data(policy);
         return ret;
 }
@@ -338,6 +570,8 @@ static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
         unsigned int cpu = policy->cpu;
         int ret;

+        cppc_cpufreq_cpu_fie_exit(policy);
+
         cpu_data->perf_ctrls.desired_perf = caps->lowest_perf;

         ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
@@ -345,12 +579,7 @@ static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
                 pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
                          caps->lowest_perf, cpu, ret);

-        /* Remove CPU node from list and free driver data for policy */
-        free_cpumask_var(cpu_data->shared_cpu_map);
-        list_del(&cpu_data->node);
-        kfree(policy->driver_data);
-        policy->driver_data = NULL;
+        cppc_cpufreq_put_cpu_data(policy);

         return 0;
 }
@@ -362,28 +591,25 @@ static inline u64 get_delta(u64 t1, u64 t0)
         return (u32)t1 - (u32)t0;
 }

-static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
-                                     struct cppc_perf_fb_ctrs fb_ctrs_t0,
-                                     struct cppc_perf_fb_ctrs fb_ctrs_t1)
+static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+                                 struct cppc_perf_fb_ctrs *fb_ctrs_t0,
+                                 struct cppc_perf_fb_ctrs *fb_ctrs_t1)
 {
         u64 delta_reference, delta_delivered;
-        u64 reference_perf, delivered_perf;
+        u64 reference_perf;

-        reference_perf = fb_ctrs_t0.reference_perf;
+        reference_perf = fb_ctrs_t0->reference_perf;

-        delta_reference = get_delta(fb_ctrs_t1.reference,
-                                    fb_ctrs_t0.reference);
-        delta_delivered = get_delta(fb_ctrs_t1.delivered,
-                                    fb_ctrs_t0.delivered);
+        delta_reference = get_delta(fb_ctrs_t1->reference,
+                                    fb_ctrs_t0->reference);
+        delta_delivered = get_delta(fb_ctrs_t1->delivered,
+                                    fb_ctrs_t0->delivered);

-        /* Check to avoid divide-by zero */
-        if (delta_reference || delta_delivered)
-                delivered_perf = (reference_perf * delta_delivered) /
-                                 delta_reference;
-        else
-                delivered_perf = cpu_data->perf_ctrls.desired_perf;
+        /* Check to avoid divide-by zero and invalid delivered_perf */
+        if (!delta_reference || !delta_delivered)
+                return cpu_data->perf_ctrls.desired_perf;

-        return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
+        return (reference_perf * delta_delivered) / delta_reference;
 }

 static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
@@ -391,6 +617,7 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
         struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
         struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
         struct cppc_cpudata *cpu_data = policy->driver_data;
+        u64 delivered_perf;
         int ret;

         cpufreq_cpu_put(policy);
@@ -405,7 +632,10 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
         if (ret)
                 return ret;

-        return cppc_get_rate_from_fbctrs(cpu_data, fb_ctrs_t0, fb_ctrs_t1);
+        delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
+                                               &fb_ctrs_t1);
+
+        return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
 }

 static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
@@ -506,14 +736,21 @@ static void cppc_check_hisi_workaround(void)

 static int __init cppc_cpufreq_init(void)
 {
+        int ret;
+
         if ((acpi_disabled) || !acpi_cpc_valid())
                 return -ENODEV;

         INIT_LIST_HEAD(&cpu_data_list);

         cppc_check_hisi_workaround();
+        cppc_freq_invariance_init();

-        return cpufreq_register_driver(&cppc_cpufreq_driver);
+        ret = cpufreq_register_driver(&cppc_cpufreq_driver);
+        if (ret)
+                cppc_freq_invariance_exit();
+
+        return ret;
 }

 static inline void free_cpu_data(void)
@@ -531,6 +768,7 @@ static inline void free_cpu_data(void)
 static void __exit cppc_cpufreq_exit(void)
 {
         cpufreq_unregister_driver(&cppc_cpufreq_driver);
+        cppc_freq_invariance_exit();

         free_cpu_data();
 }
......
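The counter arithmetic that both cppc_cpufreq_get_rate() and the FIE work function now share is worth restating outside the driver. Delivered performance over a sampling window is reference_perf * delta_delivered / delta_reference; get_rate() converts that to kHz, while cppc_scale_freq_workfn() normalizes it against highest_perf on the scheduler's 0..1024 capacity scale. A freestanding sketch of the math (plain C with stdint types; the names echo the driver but are not taken from it, and the zero-delta fallback to desired_perf is noted rather than modeled):

#include <stdint.h>

#define SCHED_CAPACITY_SHIFT 10 /* 1024 == running at full capacity */

/* Snapshot of the CPPC feedback counters, as in struct cppc_perf_fb_ctrs. */
struct fb_ctrs {
        uint64_t reference;      /* counts at a fixed reference rate */
        uint64_t delivered;      /* counts at the actually delivered rate */
        uint64_t reference_perf; /* perf level of the reference counter */
};

/*
 * Same formula as cppc_perf_from_fbctrs() above; the driver additionally
 * returns desired_perf when either delta is zero, to avoid dividing by zero.
 */
static uint64_t perf_from_fbctrs(const struct fb_ctrs *t0,
                                 const struct fb_ctrs *t1)
{
        uint64_t delta_ref = t1->reference - t0->reference;
        uint64_t delta_del = t1->delivered - t0->delivered;

        return (t0->reference_perf * delta_del) / delta_ref;
}

/*
 * Same normalization as cppc_scale_freq_workfn(): perf relative to
 * highest_perf, expressed out of 1024 and clamped in case a counter
 * wrapped between the two snapshots.
 */
static unsigned long freq_scale(uint64_t perf, uint64_t highest_perf)
{
        unsigned long scale = (perf << SCHED_CAPACITY_SHIFT) / highest_perf;

        return scale > 1024 ? 1024 : scale;
}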
@@ -15,7 +15,7 @@
  * Machines for which the cpufreq device is *always* created, mostly used for
  * platforms using "operating-points" (V1) property.
  */
-static const struct of_device_id whitelist[] __initconst = {
+static const struct of_device_id allowlist[] __initconst = {
         { .compatible = "allwinner,sun4i-a10", },
         { .compatible = "allwinner,sun5i-a10s", },
         { .compatible = "allwinner,sun5i-a13", },
@@ -100,7 +100,7 @@ static const struct of_device_id whitelist[] __initconst = {
  * Machines for which the cpufreq device is *not* created, mostly used for
  * platforms using "operating-points-v2" property.
  */
-static const struct of_device_id blacklist[] __initconst = {
+static const struct of_device_id blocklist[] __initconst = {
         { .compatible = "allwinner,sun50i-h6", },
         { .compatible = "arm,vexpress", },
@@ -126,6 +126,7 @@ static const struct of_device_id blacklist[] __initconst = {
         { .compatible = "mediatek,mt8173", },
         { .compatible = "mediatek,mt8176", },
         { .compatible = "mediatek,mt8183", },
+        { .compatible = "mediatek,mt8365", },
         { .compatible = "mediatek,mt8516", },
         { .compatible = "nvidia,tegra20", },
@@ -137,6 +138,7 @@ static const struct of_device_id blacklist[] __initconst = {
         { .compatible = "qcom,msm8996", },
         { .compatible = "qcom,qcs404", },
         { .compatible = "qcom,sc7180", },
+        { .compatible = "qcom,sc7280", },
         { .compatible = "qcom,sdm845", },
         { .compatible = "st,stih407", },
@@ -177,13 +179,13 @@ static int __init cpufreq_dt_platdev_init(void)
         if (!np)
                 return -ENODEV;

-        match = of_match_node(whitelist, np);
+        match = of_match_node(allowlist, np);
         if (match) {
                 data = match->data;
                 goto create_pdev;
         }

-        if (cpu0_node_has_opp_v2_prop() && !of_match_node(blacklist, np))
+        if (cpu0_node_has_opp_v2_prop() && !of_match_node(blocklist, np))
                 goto create_pdev;

         of_node_put(np);
......
@@ -537,6 +537,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
         { .compatible = "mediatek,mt8173", },
         { .compatible = "mediatek,mt8176", },
         { .compatible = "mediatek,mt8183", },
+        { .compatible = "mediatek,mt8365", },
         { .compatible = "mediatek,mt8516", },

         { }
......
@@ -174,7 +174,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
         nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
         if (nr_opp <= 0) {
                 dev_err(cpu_dev, "%s: No OPPs for this device: %d\n",
-                        __func__, ret);
+                        __func__, nr_opp);

                 ret = -ENODEV;
                 goto out_free_opp;
......
@@ -186,7 +186,6 @@
 #define CLK_INFRA_PMICWRAP        11
 #define CLK_INFRA_CLK_13M         12
 #define CLK_INFRA_CA53SEL         13
-#define CLK_INFRA_CA57SEL         14        /* Deprecated. Don't use it. */
 #define CLK_INFRA_CA72SEL         14
 #define CLK_INFRA_NR_CLK          15
......
@@ -37,6 +37,7 @@ bool topology_scale_freq_invariant(void);
 enum scale_freq_source {
         SCALE_FREQ_SOURCE_CPUFREQ = 0,
         SCALE_FREQ_SOURCE_ARCH,
+        SCALE_FREQ_SOURCE_CPPC,
 };

 struct scale_freq_data {
......
@@ -7182,6 +7182,7 @@ int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
 {
         return __sched_setscheduler(p, attr, false, true);
 }
+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);

 /**
  * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
......
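The one-line export above is what makes the Kconfig combination earlier in this pull work: with the CPPC driver built as a module, cppc_freq_invariance_init() calls sched_setattr_nocheck() to put the "cppc_fie" kthread worker on SCHED_DEADLINE, and module code cannot reach the function without EXPORT_SYMBOL_GPL. A trimmed sketch of that call pattern (make_worker_deadline() is a hypothetical wrapper; the attr values are the driver's own fake-bandwidth workaround shown above):

#include <linux/kthread.h>
#include <linux/sched.h>
#include <uapi/linux/sched/types.h>

static int make_worker_deadline(struct kthread_worker *kw)
{
        struct sched_attr attr = {
                .size           = sizeof(struct sched_attr),
                .sched_policy   = SCHED_DEADLINE,
                /* fake (unused) bandwidth, as in cppc_freq_invariance_init() */
                .sched_runtime  = 1000000,
                .sched_deadline = 10000000,
                .sched_period   = 10000000,
        };

        /*
         * Unlike the sched_setattr() syscall path, the _nocheck variant
         * skips the permission checks applied to user requests; it is
         * meant for kernel-internal callers like this one.
         */
        return sched_setattr_nocheck(kw->task, &attr);
}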