- 18 May, 2016 5 commits
-
-
Viresh Kumar authored
Prefix print messages with KBUILD_MODNAME, i.e. 'cpufreq_schedutil: '. This helps to keep similar formatting for all the print messages particular to a file and identify them easily in kernel logs. It's already done this way for the rest of the governors. Along with that, remove the (now) redundant bits from a print message. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
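A minimal sketch of the idiom being referred to (the exact placement in the governor source is assumed here, not quoted from the patch): defining pr_fmt() before any includes makes every pr_*() call in the file carry the module-name prefix automatically.

/* Sketch: prefix all pr_*() output in this file with the module name. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/kernel.h>

static void example(void)
{
	/* With the prefix in place, the message text no longer needs to
	 * repeat the governor name itself. */
	pr_info("fast frequency switching enabled\n");
}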
-
Pankaj Gupta authored
Simplify the 'goto out' paths in cpufreq_register_driver() to increase code readability. Signed-off-by: Pankaj Gupta <pankaj.gupta@spreadtrum.com> Signed-off-by: Sanjeev Yadav <sanjeev.yadav@spreadtrum.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Rafael J. Wysocki authored
None of the cpufreq governors currently in the tree will ever fail an invocation of the ->governor() callback with the event argument equal to CPUFREQ_GOV_STOP (unless invoked with incorrect arguments which doesn't matter anyway) and it is rather difficult to imagine a valid reason for such a failure. Accordingly, rearrange the code in the core to make it clear that this call never fails. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
-
Rafael J. Wysocki authored
None of the cpufreq governors currently in the tree will ever fail an invocation of the ->governor() callback with the event argument equal to CPUFREQ_GOV_POLICY_EXIT (unless invoked with incorrect arguments which doesn't matter anyway) and it wouldn't really make sense to fail it, because the caller won't be able to handle that failure in a meaningful way. Accordingly, rearrange the code in the core to make it clear that this call never fails. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
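As an illustration of the resulting pattern in the core for both of the changes above (names simplified; a sketch of the idea, not the actual cpufreq.c code): once CPUFREQ_GOV_STOP and CPUFREQ_GOV_POLICY_EXIT are known not to fail, the caller simply stops checking their return values.

#include <linux/cpufreq.h>

/* Sketch only: the core no longer propagates errors from these two events. */
static void cpufreq_governor_stop_and_exit(struct cpufreq_policy *policy)
{
	/* Treated as calls that cannot fail for any in-tree governor. */
	policy->governor->governor(policy, CPUFREQ_GOV_STOP);
	policy->governor->governor(policy, CPUFREQ_GOV_POLICY_EXIT);
}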
-
Rafael J. Wysocki authored
One of the if () statements in intel_pstate_set_policy() causes another if () to be evaluated if the condition is true and it doesn't do anything else, so merge the two if () statements into one. No functional changes. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
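A generic illustration of this kind of cleanup (the identifiers below are made up, not the driver's actual ones): an outer if () whose only job is to guard another if () collapses into a single condition.

#include <stdbool.h>

static bool policy_is_performance, limits_changed;
static void update_limits(void) { }

static void set_policy_before(void)
{
	/* Before: the outer if exists only to guard the inner one. */
	if (policy_is_performance) {
		if (limits_changed)
			update_limits();
	}
}

static void set_policy_after(void)
{
	/* After: one merged condition, no functional change. */
	if (policy_is_performance && limits_changed)
		update_limits();
}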
-
- 11 May, 2016 8 commits
-
-
Rafael J. Wysocki authored
The comments and the core_busy variable name in get_target_pstate_use_performance() are totally confusing, so modify them to reflect what's going on. The results of the computations should be the same as before. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Rafael J. Wysocki authored
Notice that get_avg_pstate() can use sample.core_avg_perf instead of carrying out the same division again, so make it do that. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Rafael J. Wysocki authored
The core_pct_busy field of struct sample actually contains the average performance during the last sampling period (in percent) and not the utilization of the core as suggested by its name, which is confusing. For this reason, change the name of that field to core_avg_perf and rename the function that computes its value accordingly. Also notice that storing this value as a percentage requires a costly integer multiplication to be carried out in a hot path, so instead store it as an "extended fixed point" value with more fraction bits and update the code using it accordingly (it is better to change the name of the field along with its meaning in one go than to make those two changes separately, as that would likely lead to more confusion). Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
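A sketch of the kind of "extended fixed point" representation described above; the fraction widths below are chosen purely for illustration and the actual constants in intel_pstate may differ.

#include <linux/types.h>
#include <linux/math64.h>

/* Illustrative fraction widths; the real driver's values may differ. */
#define FRAC_BITS	8
#define EXT_BITS	6
#define EXT_FRAC_BITS	(EXT_BITS + FRAC_BITS)

/* Multiply two extended fixed-point values, keeping EXT_FRAC_BITS of fraction. */
static inline u64 mul_ext_fp(u64 x, u64 y)
{
	return (x * y) >> EXT_FRAC_BITS;
}

/* Divide, producing an extended fixed-point result (no percent scaling needed). */
static inline u64 div_ext_fp(u64 x, u64 y)
{
	return div64_u64(x << EXT_FRAC_BITS, y);
}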
-
Chen Yu authored
Currently, in intel_pstate_clear_update_util_hook(), after clearing the utilization update hook, we leverage synchronize_sched() to deal with synchronization, which is somewhat costly because synchronize_sched() has to wait for all the CPUs to go through a grace period. Actually, the synchronize_sched() is not necessary if the utilization update hook has not been set for the given CPU yet, so make the driver check if that's the case and avoid the synchronize_sched() call then. Link: https://bugzilla.kernel.org/show_bug.cgi?id=116371 Tested-by: Tian Ye <yex.tian@intel.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> [ rjw : Rebase ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
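A hedged sketch of the described check (the per-CPU flag name update_util_set and the struct name are assumptions for illustration): if no hook was ever installed for the CPU, there is nothing other CPUs could still be executing, so the grace-period wait can be skipped.

#include <linux/types.h>
#include <linux/sched.h>
#include <linux/rcupdate.h>

struct cpudata_sketch {
	bool update_util_set;	/* assumed flag name */
};

static void clear_update_util_hook(struct cpudata_sketch *cpu, unsigned int cpu_num)
{
	/* Nothing was registered for this CPU; no need to synchronize. */
	if (!cpu->update_util_set)
		return;

	cpufreq_remove_update_util_hook(cpu_num);
	cpu->update_util_set = false;
	synchronize_sched();
}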
-
Rafael J. Wysocki authored
-
Arnd Bergmann authored
CPU_FREQ_GOV_SCHEDUTIL gained a dependency on SMP, so now we get a warning if it gets selected by CPU_FREQ_DEFAULT_GOV_SCHEDUTIL without SMP: warning: (CPU_FREQ_DEFAULT_GOV_SCHEDUTIL) selects CPU_FREQ_GOV_SCHEDUTIL which has unmet direct dependencies (CPU_FREQ && SMP) This adds another dependency to avoid the problem. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: bf7cdff1 (cpufreq: schedutil: Make it depend on CONFIG_SMP) Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Akshay Adiga authored
When the global and local pstates are equal in a powernv_target_index() call, we don't queue a timer. But a timer may already be queued to fire in the future, which could cause it to fire one additional time for no use. Signed-off-by: Akshay Adiga <akshay.adiga@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Akshay Adiga authored
Fix a WARN_ON triggered by calling smp_call_function_any() with irqs disabled, caused by changes made in the patch ('cpufreq: powernv: Ramp-down global pstate slower than local-pstate') https://patchwork.ozlabs.org/patch/612058/

WARNING: CPU: 0 PID: 4 at kernel/smp.c:291 smp_call_function_single+0x170/0x180
Call Trace:
[c0000007f648f9f0] [c0000007f648fa90] 0xc0000007f648fa90 (unreliable)
[c0000007f648fa30] [c0000000001430e0] smp_call_function_any+0x170/0x1c0
[c0000007f648fa90] [c0000000007b4b00] powernv_cpufreq_target_index+0xe0/0x250
[c0000007f648fb00] [c0000000007ac9dc] __cpufreq_driver_target+0x20c/0x3d0
[c0000007f648fbc0] [c0000000007b1b4c] od_dbs_timer+0xcc/0x260
[c0000007f648fc10] [c0000000007b3024] dbs_work_handler+0x54/0xa0
[c0000007f648fc50] [c0000000000c49a8] process_one_work+0x1d8/0x590
[c0000007f648fce0] [c0000000000c4e08] worker_thread+0xa8/0x660
[c0000007f648fd80] [c0000000000cca88] kthread+0x108/0x130
[c0000007f648fe30] [c0000000000095e8] ret_from_kernel_thread+0x5c/0x74

- Calling smp_call_function_any() with interrupts disabled (through spin_lock_irqsave) could cause a deadlock, because smp_call_function_any() relies on the IPI to complete. This is detected in the smp_call_function_any() call, hence the WARN_ON.
- The spinlock (gpstates->lock) is only used to synchronize access to global_pstate_info between the timer irq handler and target_index calls, and the timer irq handler only does a try_lock(), so it cannot cause a deadlock. Hence we can do without making the spinlock IRQ safe.
- Since smp_call_function_any() is a blocking call and does not access global_pstate_info, the critical section can be reduced by moving smp_call_function_any() to after the lock is released (see the sketch below).

Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Signed-off-by: Akshay Adiga <akshay.adiga@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
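A simplified sketch of the reordering described in the last bullet (names follow the driver only loosely; treat it as illustrative, not the exact patch): the blocking IPI is issued only after the spinlock protecting global_pstate_info has been dropped.

#include <linux/cpufreq.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(gpstates_lock);		/* stands in for gpstates->lock */

static void set_pstate(void *data) { /* programs the pstate on one CPU */ }

static int powernv_target_index_sketch(struct cpufreq_policy *policy, void *freq_data)
{
	spin_lock(&gpstates_lock);
	/* ... update global_pstate_info, pick local/global pstates ... */
	spin_unlock(&gpstates_lock);

	/* Blocking IPI issued only after the lock is released. */
	smp_call_function_any(policy->cpus, set_pstate, freq_data, 1);
	return 0;
}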
-
- 09 May, 2016 2 commits
-
-
Rafael J. Wysocki authored
intel_pstate_get() contains a local variable that's initialized but never used and it can be written in fewer lines of code, so clean it up. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
-
Rafael J. Wysocki authored
-
- 06 May, 2016 3 commits
-
-
Rafael J. Wysocki authored
Make the schedutil cpufreq governor depend on CONFIG_SMP, because the scheduler-provided utilization numbers used by it are only available with CONFIG_SMP set. Fixes: 9bdcb44e (cpufreq: schedutil: New governor based on scheduler utilization data) Reported-by: Steve Muckle <steve.muckle@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Rafael J. Wysocki authored
* pm-cpufreq-fixes:
  intel_pstate: Fix intel_pstate_get()
  cpufreq: intel_pstate: Fix HWP on boot CPU after system resume
  cpufreq: st: enable selective initialization based on the platform
  cpufreq: intel_pstate: Fix processing for turbo activation ratio
-
Rafael J. Wysocki authored
As reported in KBZ 69821: "With CONFIG_HZ_PERIODIC=y cpu stays at the lowest frequency 800MHz even if usage goes to 100%, frequency does not scale up, the governor in use is ondemand. Neither does conservative. Performance and userspace governors work as expected. With CONFIG_NO_HZ_IDLE or CONFIG_NO_HZ_FULL cpu scales up with ondemand as expected." Analysis carried out by Chen Yu leads to the conclusion that the observed issue is due to idle_time in dbs_update() representing a negative number, in which case the function will return 0 as the load (unless the load is greater than 0 for another CPU sharing the policy), although that need not be the right choice. Indeed, idle_time representing a negative number means that during the last sampling interval the CPU was almost 100% busy on the rough average, so 100 should be returned as the load in that case. Modify the code accordingly and rearrange it to clarify the handling of all of the special cases in it. While at it, also avoid returning zero as the load if time_elapsed is 0 (it doesn't really make sense to return 0 then). Link: https://bugzilla.kernel.org/show_bug.cgi?id=69821 Tested-by: Chen Yu <yu.c.chen@intel.com> Tested-by: Timo Valtoaho <timo.valtoaho@gmail.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
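A hedged sketch of the special-case handling described above (variable names follow the message; the real dbs_update() has more cases and uses its own per-CPU state): a negative idle_time means the CPU was essentially fully busy, so the load is reported as 100 rather than 0, and a zero time_elapsed does not force a zero load either.

#include <linux/kernel.h>

/* Sketch of the load computation with the negative-idle_time fix; the
 * prev_load fallback for time_elapsed == 0 is an assumption here. */
static unsigned int compute_load(int idle_time, unsigned int time_elapsed,
				 unsigned int prev_load)
{
	if (unlikely(!time_elapsed))
		return prev_load;	/* no interval elapsed; don't report 0 */

	if (unlikely(idle_time < 0))
		return 100;		/* CPU was ~100% busy on the rough average */

	if ((unsigned int)idle_time > time_elapsed)
		idle_time = time_elapsed;

	return 100 * (time_elapsed - idle_time) / time_elapsed;
}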
-
- 04 May, 2016 8 commits
-
-
Srinivas Pandruvada authored
When the HWP (hardware P-states) feature is active, the ACPI _PSS and _PPC are not used, so ignore _PPC limit processing in that case. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Sudeep Holla authored
Currently, when performing random CPU hot-plugs and suspend-to-ram (S2R) on systems using the arm_big_little cpufreq driver, we get warnings similar to the one below:

cpu cpu1: _opp_add: duplicate OPPs detected. Existing: freq: 600000000, volt: 800000, enabled: 1. New: freq: 600000000, volt: 800000, enabled: 1

This is mainly because the OPPs for the shared CPUs are not set. We can just use dev_pm_opp_of_cpumask_add_table() in case the OPPs are obtained from DT (arm_big_little_dt.c), or use dev_pm_opp_set_sharing_cpus() if the OPPs are obtained by other means like firmware (e.g. scpi-cpufreq.c). Also, now that the generic dev_pm_opp{,_of}_cpumask_remove_table() can handle removal of the OPP table and entries for all associated CPUs, we can re-use dev_pm_opp{,_of}_cpumask_remove_table() as free_opp_table in cpufreq_arm_bL_ops. This patch makes the necessary changes to reuse the generic OPP functions for {init,free}_opp_table, thereby eliminating the warnings. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
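A hedged sketch of how the two generic helpers are used, depending on where the OPPs come from (the surrounding driver code is omitted and simplified):

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/pm_opp.h>

/* DT case (e.g. arm_big_little_dt): OPPs and sharing info come from DT. */
static int init_opp_table_from_dt(struct device *cpu_dev, const struct cpumask *cpus)
{
	return dev_pm_opp_of_cpumask_add_table(cpus);
}

/* Non-DT case (e.g. scpi-cpufreq): OPPs were added from firmware data,
 * so the sharing information has to be set explicitly. */
static int init_opp_table_from_fw(struct device *cpu_dev, const struct cpumask *cpus)
{
	/* ... dev_pm_opp_add() calls based on firmware-provided OPPs ... */
	return dev_pm_opp_set_sharing_cpus(cpu_dev, cpus);
}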
-
Rafael J. Wysocki authored
-
Sudeep Holla authored
The dev_pm_opp_of_{cpumask_,}remove_table() functions remove/free all the static OPP entries associated with the device and/or all CPUs (in the cpumask case) that were created from DT. However, OPP entries populated by reading from firmware, or by some other method using dev_pm_opp_add(), are marked dynamic and can't be removed using the above functions. This patch adds non-DT/OF versions of dev_pm_opp_{cpumask_,}remove_table to support that use case. This is in preparation for making use of the same in scpi-cpufreq.c. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
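A short usage sketch of the two new non-DT helpers (the driver context is assumed; in practice a driver would call one or the other, not both for the same device):

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/pm_opp.h>

static void free_fw_opp_tables(struct device *cpu_dev, const struct cpumask *cpus)
{
	/* Remove the OPP table of a single device... */
	dev_pm_opp_remove_table(cpu_dev);

	/* ...or the tables of all CPUs in the mask (the non-DT counterpart
	 * of dev_pm_opp_of_cpumask_remove_table()). */
	dev_pm_opp_cpumask_remove_table(cpus);
}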
-
Marc Gonzalez authored
Add tango4 compatible string to the list. Signed-off-by: Marc Gonzalez <marc_gonzalez@sigmadesigns.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Arnd Bergmann authored
The new use of dev_pm_opp_set_sharing_cpus resulted in a harmless compiler warning with CONFIG_CPUMASK_OFFSTACK=y: drivers/cpufreq/mvebu-cpufreq.c: In function 'armada_xp_pmsu_cpufreq_init': include/linux/cpumask.h:550:25: error: passing argument 2 of 'dev_pm_opp_set_sharing_cpus' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers] The problem here is that cpumask_var_t gets passed by reference, but by declaring a 'const cpumask_var_t' argument, only the pointer is constant, not the actual mask. This is harmless because the function does not actually modify the mask. This patch changes the function prototypes for all of the related functions to pass a 'struct cpumask *' instead of 'cpumask_var_t', matching what most other such functions do in the kernel. This lets us mark all the other similar functions as taking a 'const' mask where possible, and it avoids the warning without any change in object code. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: 947bd567 (mvebu: Use dev_pm_opp_set_sharing_cpus() to mark OPP tables as shared) Acked-by: Pavel Machek <pavel@ucw.cz> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
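The gist of the prototype change, shown for one of the affected functions (the full patch touches several of the related OPP helpers):

#include <linux/cpumask.h>
#include <linux/device.h>

/*
 * Before: the parameter was declared as cpumask_var_t, and the obvious
 * fix of writing 'const cpumask_var_t' would only make the pointer
 * constant, not the mask it points to:
 *
 *     int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
 *                                     cpumask_var_t cpumask);
 */

/* After: a genuinely const mask pointer, matching common kernel practice. */
int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
				const struct cpumask *cpumask);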
-
Sai Gurrappadi authored
Currently, the userspace governor only updates the frequency on GOV_LIMITS if policy->cur falls outside policy->{min/max}. However, it is also necessary to update the current frequency on GOV_LIMITS to match the user requested value if it can be achieved within the new policy->{max/min}. This was previously the behaviour in the governor until commit d1922f02 ("cpufreq: Simplify userspace governor"), which incorrectly assumed that policy->cur == user requested frequency via scaling_setspeed. This won't be true if the user requested frequency falls outside policy->{min/max}, e.g. when a temporary thermal cap throttled the user requested frequency. Fix this by storing the user requested frequency in a separate variable. The governor will then try to achieve this request on every GOV_LIMITS change. Fixes: d1922f02 (cpufreq: Simplify userspace governor) Signed-off-by: Sai Gurrappadi <sgurrappadi@nvidia.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
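A hedged sketch of the approach described (the per-policy storage and function names here are simplifications of the actual governor code): the user's requested speed is kept in the policy's governor data and re-applied, clamped to the new limits, whenever they change.

#include <linux/kernel.h>
#include <linux/cpufreq.h>

/* On a "scaling_setspeed" write: remember the request, then try to apply it. */
static int userspace_set_speed(struct cpufreq_policy *policy, unsigned int freq)
{
	unsigned int *setspeed = policy->governor_data;

	*setspeed = freq;
	return __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
}

/* On GOV_LIMITS: re-apply the stored request within the new min/max. */
static void userspace_limits(struct cpufreq_policy *policy)
{
	unsigned int *setspeed = policy->governor_data;
	unsigned int freq = clamp(*setspeed, policy->min, policy->max);

	__cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
}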
-
Rafael J. Wysocki authored
After commit 8fa520af "intel_pstate: Remove freq calculation from intel_pstate_calc_busy()" intel_pstate_get() calls get_avg_frequency() to compute the average frequency, which is problematic for two reasons. First, intel_pstate_get() may be invoked before the driver reads the CPU feedback registers for the first time and if that happens, get_avg_frequency() will attempt to divide by zero. Second, the get_avg_frequency() call in intel_pstate_get() is racy with respect to intel_pstate_sample() and it may end up returning completely meaningless values for this reason. Moreover, after commit 7349ec04 "intel_pstate: Move intel_pstate_calc_busy() into get_target_pstate_use_performance()" sample.core_pct_busy is never computed on Atom, but it is used in intel_pstate_adjust_busy_pstate() in that case too. To address those problems notice that if sample.core_pct_busy was used in the average frequency computation carried out by get_avg_frequency(), both the divide by zero problem and the race with respect to intel_pstate_sample() would be avoided. Accordingly, move the invocation of intel_pstate_calc_busy() from get_target_pstate_use_performance() to intel_pstate_update_util(), which also will take care of the uninitialized sample.core_pct_busy on Atom, and modify get_avg_frequency() to use sample.core_pct_busy as per the above. Reported-by: kernel test robot <ying.huang@linux.intel.com> Link: http://marc.info/?l=linux-kernel&m=146226437623173&w=4 Fixes: 8fa520af "intel_pstate: Remove freq calculation from intel_pstate_calc_busy()" Fixes: 7349ec04 "intel_pstate: Move intel_pstate_calc_busy() into get_target_pstate_use_performance()" Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 02 May, 2016 1 commit
-
-
Rafael J. Wysocki authored
Commit 41cfd64c "Update frequencies of policy->cpus only from ->set_policy()" changed the way the intel_pstate driver's ->set_policy callback updates the HWP (hardware-managed P-states) settings. A side effect of it is that if those settings are modified on the boot CPU during system suspend and wakeup, they will never be restored during subsequent system resume. To address this problem, allow cpufreq drivers that don't provide ->target or ->target_index callbacks to use ->suspend and ->resume callbacks and add a ->resume callback to intel_pstate to restore the HWP settings on the CPUs that belong to the given policy. Fixes: 41cfd64c "Update frequencies of policy->cpus only from ->set_policy()" Tested-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
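A hedged sketch of what such a ->resume callback might look like; intel_pstate_hwp_set() is assumed here to be a helper that re-programs the HWP request registers for the CPUs in the mask, and hwp_active a flag indicating HWP is in use.

#include <linux/cpufreq.h>

static bool hwp_active;					/* set when HWP is in use */
static void intel_pstate_hwp_set(const struct cpumask *cpus);	/* assumed helper */

static int intel_pstate_resume(struct cpufreq_policy *policy)
{
	if (!hwp_active)
		return 0;

	/* Restore the HWP limits for the CPUs covered by this policy. */
	intel_pstate_hwp_set(policy->cpus);
	return 0;
}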
-
- 28 Apr, 2016 10 commits
-
-
Sudeep Holla authored
The sti-cpufreq driver does unconditional registration of the cpufreq-dt driver, which causes issues on a multi-platform build. For example, on the Vexpress TC2 platform, we get the following errors on boot:

cpu cpu0: OPP-v2 not supported
cpu cpu0: Not doing voltage scaling
cpu: dev_pm_opp_of_cpumask_add_table: couldn't find opp table for cpu:0, -19
cpu cpu0: dev_pm_opp_get_max_volt_latency: Invalid regulator (-6)
...
arm_big_little: bL_cpufreq_register: Failed registering platform driver: vexpress-spc, err: -17

The actual driver fails to initialise as cpufreq-dt is probed successfully, which is incorrect. This issue can happen on any platform not using cpufreq-dt in a multi-platform build. This patch adds a check to do selective initialization of the driver. Fixes: ab0ea257 (cpufreq: st: Provide runtime initialised driver for ST's platforms) Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Lee Jones <lee.jones@linaro.org> Cc: 4.5+ <stable@vger.kernel.org> # 4.5+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
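A hedged sketch of the kind of platform check added (the compatible strings shown are assumptions): bail out early when the driver is built into a multi-platform kernel but is running on an unrelated SoC.

#include <linux/errno.h>
#include <linux/module.h>
#include <linux/of.h>

static int __init sti_cpufreq_init(void)
{
	/* Only initialize on the ST platforms this driver targets
	 * (compatible strings here are illustrative). */
	if (!of_machine_is_compatible("st,stih407") &&
	    !of_machine_is_compatible("st,stih410"))
		return -ENODEV;

	/* ... existing probing and cpufreq-dt registration ... */
	return 0;
}
module_init(sti_cpufreq_init);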
-
Viresh Kumar authored
Move the cpufreq bits for mvebu into the drivers/cpufreq/ directory; that's where they really belong. Compile tested only. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Viresh Kumar authored
There are no more users of platform-data for cpufreq-dt driver, get rid of it. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Viresh Kumar authored
That will allow us to avoid using cpufreq-dt platform data. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Viresh Kumar authored
Existing platforms which do not support operating-points-v2 can explicitly tell the OPP core that some of the CPUs share OPP tables, with the help of dev_pm_opp_set_sharing_cpus(). For such platforms, explicitly ask the OPP core to provide the list of CPUs sharing the OPP table with the current CPU device, before falling back to platform data. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Rafael J. Wysocki authored
-
Viresh Kumar authored
The OPP core allows a platform to mark an OPP table as shared when the platform isn't using operating-points-v2 bindings, and so there should be a non-DT way of finding out whether the OPP table is shared. This patch adds dev_pm_opp_get_sharing_cpus(), which first tries to get OPP sharing information from the OPP table (in case it is already marked as shared) and otherwise uses the existing DT way of finding sharing information. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
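A hedged sketch of the intended call pattern in a cpufreq driver's init path (error handling trimmed, helper name made up):

#include <linux/cpufreq.h>
#include <linux/device.h>
#include <linux/pm_opp.h>

static int fill_shared_cpus(struct device *cpu_dev, struct cpufreq_policy *policy)
{
	int ret;

	/* Ask the OPP core first; it knows about tables marked as shared
	 * via dev_pm_opp_set_sharing_cpus() as well as DT-provided info. */
	ret = dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus);
	if (ret)
		return ret;	/* caller may fall back to platform data */

	return 0;
}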
-
Viresh Kumar authored
dev_pm_opp_set_sharing_cpus() isn't supposed to update the cpumask passed as its parameter, and so it should always have been marked 'const'. Do it now. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Viresh Kumar authored
Some of the routines have used -ENOSYS for the cases where the functionality isn't implemented in the kernel, but ENOSYS is supposed to be used only for syscalls. Replace that with -ENOTSUPP, which specifically means that the operation isn't supported. While at it, replace existing -EINVAL errors for similar cases with -ENOTSUPP. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Rafael J. Wysocki authored
The name of the prev_cpu_wall field in struct cpu_dbs_info is confusing, because it doesn't represent wall time, but the previous update time as returned by get_cpu_idle_time() (that may be the current value of jiffies_64 in some cases, for example). Moreover, the names of some related variables in dbs_update() take that confusion further. Rename all of those things to make their names reflect the purpose more accurately. While at it, drop unnecessary parens from one of the updated expressions. No functional changes. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Chen Yu <yu.c.chen@intel.com>
-
- 27 Apr, 2016 3 commits
-
-
Srinivas Pandruvada authored
For platforms which are controlled via a remote node manager, enable _PPC by default. These platforms are mostly categorized as enterprise or performance servers. They need to go through certification tests, which test control via _PPC. The relative risk of enabling this by default is low, as it is less likely that these systems have a broken _PSS table. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
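A hedged sketch of how a driver can tell it is running on such a server-class platform; the FADT preferred-profile check is a standard ACPI mechanism, though whether intel_pstate uses exactly this helper is an assumption here.

#include <linux/acpi.h>
#include <linux/types.h>

/* True on platforms whose ACPI FADT declares a server power profile. */
static bool platform_is_server(void)
{
	return acpi_gbl_FADT.preferred_profile == PM_ENTERPRISE_SERVER ||
	       acpi_gbl_FADT.preferred_profile == PM_PERFORMANCE_SERVER;
}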
-
Srinivas Pandruvada authored
When policy->max is changed via _PPC or sysfs and is more than the max non-turbo frequency, it does not really change the resulting performance in some processors. When policy->max results in a P-State ratio more than the turbo activation ratio, the processor can choose any P-State up to max turbo. So the user or _PPC setting has no value, but it can cause undesirable side effects like:
- Showing a reduced max percentage in Intel P-State sysfs
- Reduced max performance under certain boundary conditions: the requested max scaling frequency, either via _PPC or via cpufreq sysfs, will be converted into a fixed-point max percent scale. In the majority of cases this will result in the correct max, but not 100% of the time. If the _PPC is requested at a point where the calculation leads to a lower max, this can result in a lower P-State than expected and will impact performance.

Example of this condition using a Broadwell laptop with config TDP.

ACPI _PSS table from a Broadwell laptop:
2301000 2300000 2200000 2000000 1900000 1800000 1700000 1500000 1400000 1300000 1100000 1000000 900000 800000 600000 500000

The actual results with config TDP disabled, so that we can get what is requested on or below 2300000 KHz:

scaling_max_freq   Max Requested P-State   Resultant scaling max
----------------   ---------------------   ---------------------
2400000            18                      2900000 (max turbo)
2300000            17                      2300000 (max physical non turbo)
2200000            15                      2100000
2100000            15                      2100000
2000000            13                      1900000
1900000            13                      1900000
1800000            12                      1800000
1700000            11                      1700000
1600000            10                      1600000
1500000            f                       1500000
1400000            e                       1400000
1300000            d                       1300000
1200000            c                       1200000
1100000            a                       1000000
1000000            a                       1000000
900000             9                       900000
800000             8                       800000
700000             7                       700000
600000             6                       600000
500000             5                       500000

Now set the config TDP level 1 ratio to 0x0b (equivalent to 1100000 KHz) in the BIOS (not every system will let you adjust this). The turbo activation ratio will be set to one less than that, i.e. 0x0a (so any request above 1000000 KHz should result in the turbo region, assuming no thermal limits). Here _PPC will request a max of 1100000 KHz (which should still result in turbo, as this is more than the turbo activation ratio, up to the max allowable turbo frequency), but the actual calculation results in a max ceiling P-State of 0x0a. So under any load condition, the driver will not request turbo P-States. This is a huge performance hit.

When the config TDP feature is ON, if the _PPC points to a frequency above the turbo activation ratio, the performance can still reach max turbo. In this case we don't need to treat it as a reduced frequency in the set_policy callback. With this change, when config TDP is active (determined by checking whether the physical max non-turbo ratio is more than the current max non-turbo ratio), any request above the current max non-turbo is treated as full performance.

Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> [ rjw : Minor cleanups ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Srinivas Pandruvada authored
Use ACPI _PPC notification to limit the max P-State the driver will request. An ACPI _PPC change notification is sent by the BIOS to limit the max P-State in several cases:
- To reduce the impact of a platform thermal condition
- When the Config TDP feature is used, a changed _PPC is sent to follow the TDP change
- Remote node managers in servers want to control platform power via a baseboard management controller (BMC)

This change registers with the ACPI processor performance lib so that _PPC changes are notified to the cpufreq core, which in turn will result in a call to the .setpolicy() callback. Also, the way the _PSS table identifies a turbo frequency is not compatible with the max turbo frequency in intel_pstate, so the very first entry in _PSS needs to be adjusted. This feature can be turned on by using the kernel parameter: intel_pstate=support_acpi_ppc

Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> [ rjw: Minor cleanups ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-