- 08 Oct, 2020 22 commits
-
-
Christophe Leroy authored
get_tbu() is redundant with mftbu() and is not used anymore. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1746e2d11ea90c3f45877e1fcc6c79ce96cf6b98.1601556145.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
get_tbl() is confusing as it returns the content of the TBL register on PPC32 but the concatenation of TBL and TBU on PPC64. Use mftb() instead. Do the same with get_tbu() for consistency, although its name is less confusing. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/41573406a4eab98838decaa91649086fef1e6119.1601556145.git.christophe.leroy@csgroup.eu
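A minimal sketch, assuming a 32-bit PowerPC target, of the classic TBU/TBL read loop that illustrates why the TBL/TBU split makes names like get_tbl() matter on PPC32:

    /* Read the full 64-bit timebase from the 32-bit TBU/TBL halves.
     * Retry if TBL wrapped into TBU between the two reads. */
    static inline unsigned long long read_tb64_ppc32(void)
    {
            unsigned long hi, lo, tmp;

            do {
                    asm volatile("mftbu %0" : "=r" (hi));
                    asm volatile("mftb  %0" : "=r" (lo));
                    asm volatile("mftbu %0" : "=r" (tmp));
            } while (hi != tmp);

            return ((unsigned long long)hi << 32) | lo;
    }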
-
Christophe Leroy authored
No need to have two versions that are identical. CONFIG_PPC_CELL is only selected by PPC64 targets. CONFIG_E500 is the only PPC64 target selecting CONFIG_FSL_BOOK3E. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6bf23ec744aab4ba63506a011f6a145ea35d620d.1601556145.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
On PPC64, we have mftb(). On PPC32, we have mftbl() and a #define mftb() mftbl(). mftb() and mftbl() are equivalent: their purpose is to read the content of SPRN_TBRL, as returned by the 'mftb' simplified instruction. binutils seems to define the 'mftbl' instruction as an equivalent of 'mftb'. However, in both the 32-bit and 64-bit documentation, only 'mftb' is defined, and when performing a disassembly with objdump, the displayed instruction is 'mftb'. No need to have two ways to do the same thing under different names; rename mftbl() so that only mftb() remains. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/94dc68d3d9ef9eb549796d4b938b6ba0305a049b.1601556145.git.christophe.leroy@csgroup.eu
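As a hedged illustration (the kernel's exact definition may differ), the unified accessor boils down to a single inline-asm wrapper around the 'mftb' mnemonic; on PPC32 it reads the lower timebase word, on PPC64 the full 64-bit timebase:

    static inline unsigned long mftb_sketch(void)
    {
            unsigned long tb;

            asm volatile("mftb %0" : "=r" (tb));
            return tb;
    }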
-
Christophe Leroy authored
head_book3s_32.S is only built when CONFIG_PPC_BOOK3S_32 is selected. Remove all conditions based on CONFIG_PPC_BOOK3S_32 in the file. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1b68632425d8866d147aea9005004e4594672211.1601975100.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Unlike PPC64, which has a single head_64.S, PPC32 has several head files. head_32.S is selected by default based on the value of BITS and overridden based on some CONFIG_ values. This leads to thinking that it may be selected by different types of PPC32 platforms, but in fact it ends up being selected by book3s/32 only. Make that explicit by: - not doing any default selection based on BITS; - renaming head_32.S to head_book3s_32.S; - getting head_book3s_32.S selected only by CONFIG_PPC_BOOK3S_32. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fix head_$(BITS).o reference in arch/powerpc/Makefile] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/319d379f696412681c66a987cc75e6abf8f958d2.1601975100.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
At present, an early hash table is set up only when CONFIG_KASAN is selected. There is nothing wrong with setting up such an early hash table all the time, even if it is not used. This is a statically allocated 256 kB table which lies in the init data section. This makes the code simpler and may in the future allow setting up early IO mappings with fixmap instead of hard-coding BATs. Put create_hpte() and flush_hash_pages() in the .ref.text section in order to avoid a warning for the reference to early_hash[]. This reference is removed by MMU_init_hw_patch() before init memory is freed. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b8f8101c368b8a6451844a58d7bd7d83c14cf2aa.1601566529.git.christophe.leroy@csgroup.eu
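A hedged sketch of the .ref.text mechanism mentioned above, using the kernel's __ref annotation (function and array names here are illustrative, not the actual powerpc code): a non-init function may legitimately reference __initdata without a section-mismatch warning when that reference is known to be patched out before init memory is freed.

    #include <linux/init.h>

    static unsigned int early_hash_sketch[64 * 1024] __initdata;    /* 256 kB */

    /* __ref places the function in .ref.text, suppressing the modpost
     * warning for the early_hash_sketch[] reference. */
    void __ref create_hpte_sketch(unsigned long slot)
    {
            early_hash_sketch[slot & (64 * 1024 - 1)] = 0;
    }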
-
Christophe Leroy authored
Move the SPRN_PIT definition into reg.h. This allows removing the ifdefs in get_dec() and set_dec() and makes them more readable. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3c9a6eb0fc040868ac59be66f338d08fd017668d.1601549945.git.christophe.leroy@csgroup.eu
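A hedged sketch of the resulting shape (assuming the 40x/PIT split; the kernel's actual helpers may differ): with SPRN_PIT always defined, the register choice collapses to a compile-time IS_ENABLED() check instead of an #ifdef around the whole function.

    static inline void set_dec_sketch(unsigned int val)
    {
            if (IS_ENABLED(CONFIG_40x))
                    mtspr(SPRN_PIT, val);   /* 40x uses the PIT */
            else
                    mtspr(SPRN_DEC, val);   /* everyone else uses DEC */
    }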
-
Christophe Leroy authored
The 601 is gone, so get_tb_or_rtc() is equivalent to get_tb(). Replace the former with the latter. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3e8a13ee83418630c753c30cb722ae682d5b2d39.1601362098.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Now that the PowerPC 601 is gone, __USE_RTC() is never true. Remove it. That also leads to removing get_rtc() and get_rtcl(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4757e1ed21fe1968c761ae081d1f3d790a9673f8.1601362098.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The removal of the 601 left some standalone blocks from former if/else. Drop the { } and re-indent. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/31c4cd093963f22831bf388449056ee045533d3b.1601362098.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The PowerPC 601 has been retired. Remove all associated specific code. CPU_FTRS_PPC601 has CPU_FTR_COHERENT_ICACHE and CPU_FTR_COMMON. CPU_FTR_COMMON is already present via other CPU_FTRS. None of the remaining CPUs selects CPU_FTR_COHERENT_ICACHE, so CPU_FTRS_PPC601 can be removed from the possible features and hence can be removed completely. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/60b725d55e21beec3335175c20b77903ff98284f.1601362098.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The PowerPC 601 is 25 years old. It is not selected by any defconfig, and it requires a lot of special handling as it deviates from the standard 6xx. Retire it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/00a6948d659e017f8ca63437d1384222c3aede57.1601362098.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Those macros are now empty at all times. Drop them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7990bb63fc53e460bfa94f8040184881d9e6fbc3.1601362098.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
This config option isn't in any defconfig. The very first versions of the PowerPC 601 have a bug which requires additional sync before and/or after some instructions. This was more than 25 years ago and the time has come to retire those buggy versions of the 601 from the kernel. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/55b46bff16705b1ae7bf0a60ccd522b1010ebf75.1601362098.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
SYNC is useful for the PowerPC 601 only. On everything else, SYNC is empty. Remove it from code that is not meant to run on 6xx. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/27951fa6c9a8f80724d1bc81a6117ac32343a55d.1601362098.git.christophe.leroy@csgroup.eu
-
Vaibhav Jain authored
Add NVDIMM_FAMILY_PAPR to the list of valid 'dimm_family_mask' values accepted by papr_scm. This is needed because, since commit 92fe2aa8 ("libnvdimm: Validate command family indices"), libnvdimm validates 'nd_cmd_pkg.nd_family' received as part of ND_CMD_CALL processing to ensure only known command families can use the general ND_CMD_CALL pass-through functionality. Without this change, ND_CMD_CALL pass-throughs targeting NVDIMM_FAMILY_PAPR error out with -EINVAL. Fixes: 92fe2aa8 ("libnvdimm: Validate command family indices") Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200913211904.24472-1-vaibhav@linux.ibm.com
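A hedged sketch of the two sides of this change (field names follow the commit message; the exact libnvdimm code may differ): ND_CMD_CALL packages carry a family index that must be present in the bus descriptor's accepted-family mask, so papr_scm now sets the NVDIMM_FAMILY_PAPR bit in that mask.

    /* libnvdimm-side concept: reject pass-throughs from unknown families */
    static int check_family_sketch(struct nd_cmd_pkg *pkg,
                                   unsigned long *family_mask)
    {
            if (!test_bit(pkg->nd_family, family_mask))
                    return -EINVAL;
            return 0;
    }

    /* papr_scm-side fix: declare the PAPR family as accepted */
    static void accept_papr_family_sketch(struct nvdimm_bus_descriptor *d)
    {
            set_bit(NVDIMM_FAMILY_PAPR, &d->dimm_family_mask);
    }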
-
Aneesh Kumar K.V authored
Make it consistent with other usages. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-5-aneesh.kumar@linux.ibm.com
-
Aneesh Kumar K.V authored
Similar to commit 89c140bb ("pseries: Fix 64 bit logical memory block panic"), make sure the different variables tracking lmb_size are updated to be 64 bit. Fixes: af9d00e9 ("powerpc/mm/radix: Create separate mappings for hot-plugged memory") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-4-aneesh.kumar@linux.ibm.com
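An illustrative sketch of the bug class fixed across this mini-series (values and variable names are assumptions): holding a 64-bit LMB size in a 32-bit variable silently truncates sizes of 4 GB and above.

    static void lmb_size_width_sketch(void)
    {
            u64 memblock_size = 1ULL << 32;     /* a 4 GB LMB */
            u32 bad  = memblock_size;           /* bug: truncates to 0 */
            u64 good = memblock_size;           /* fix: full width preserved */
    }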
-
Aneesh Kumar K.V authored
Similar to commit 89c140bb ("pseries: Fix 64 bit logical memory block panic"), make sure the different variables tracking lmb_size are updated to be 64 bit. This was found by code audit. Cc: stable@vger.kernel.org Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-3-aneesh.kumar@linux.ibm.com
-
Aneesh Kumar K.V authored
Similar to commit 89c140bb ("pseries: Fix 64 bit logical memory block panic"), make sure the different variables tracking lmb_size are updated to be 64 bit. This was found by code audit. Cc: stable@vger.kernel.org Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Acked-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007114836.282468-2-aneesh.kumar@linux.ibm.com
-
Nicholas Piggin authored
The inline execution path for the hardware assisted branch flush instruction failed to set CTR to the correct value before bcctr, causing a crash when the feature is enabled. Fixes: 4d24e21c ("powerpc/security: Allow for processors that flush the link stack using the special bcctr") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007080605.64423-1-npiggin@gmail.com
-
- 07 Oct, 2020 7 commits
-
-
Kajol Jain authored
This patch adds a cpumask attribute to the hv_gpci pmu, along with ABI documentation. The primary use of exposing the cpumask is for the perf tool, which can parse the driver sysfs folder and understand the cpumask file. Having a cpumask file reduces the number of perf command line parameters (it avoids the "-C" option in the perf tool command line) and can also show the user which CPU is currently used to retrieve the counter data.
command:# cat /sys/devices/hv_gpci/cpumask
0
Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-5-kjain@linux.ibm.com
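A minimal sketch, assuming standard perf-PMU sysfs conventions (names like hv_gpci_cpumask are illustrative), of the attribute backing the file shown above:

    static cpumask_t hv_gpci_cpumask;

    static ssize_t cpumask_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            /* prints a CPU list, e.g. "0", as in the example above */
            return cpumap_print_to_pagebuf(true, buf, &hv_gpci_cpumask);
    }

    static DEVICE_ATTR_RO(cpumask);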
-
Kajol Jain authored
This patch adds cpu hotplug functions to the hv_gpci pmu. A new cpuhp_state enum, "CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE", is added. The online callback function updates the cpumask only if it is empty, as the primary intention of adding hotplug support is to designate a CPU to make the HCALL that collects the counter data. The offline function tests and clears the corresponding cpu in the cpumask and updates the cpumask to any other active cpu. Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-4-kjain@linux.ibm.com
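A hedged sketch of the hotplug wiring described above; the cpuhp_state name follows the commit text, while the callback bodies are illustrative rather than the driver's exact code.

    static int hv_gpci_cpu_online(unsigned int cpu)
    {
            /* designate the first CPU to come up as the HCALL cpu */
            if (cpumask_empty(&hv_gpci_cpumask))
                    cpumask_set_cpu(cpu, &hv_gpci_cpumask);
            return 0;
    }

    static int hv_gpci_cpu_offline(unsigned int cpu)
    {
            unsigned int target;

            if (!cpumask_test_and_clear_cpu(cpu, &hv_gpci_cpumask))
                    return 0;

            /* migrate the designation to any other online CPU */
            target = cpumask_any_but(cpu_online_mask, cpu);
            if (target < nr_cpu_ids)
                    cpumask_set_cpu(target, &hv_gpci_cpumask);
            return 0;
    }

    /* e.g. from the PMU init path: */
    cpuhp_setup_state(CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE,
                      "perf/powerpc/hv_gpci:online",
                      hv_gpci_cpu_online, hv_gpci_cpu_offline);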
-
Kajol Jain authored
This patch adds ABI documentation for hv-gpci event format. Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-3-kjain@linux.ibm.com
-
Kajol Jain authored
This patch adds ABI documentation for hv-24x7 format. Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-2-kjain@linux.ibm.com
-
Kajol Jain authored
Commit 9e9f6010 ("powerpc/perf/{hv-gpci, hv-common}: generate requests with counters annotated") adds a framework for defining gpci counters. That patch sets the starting_index value to '0xffffffffffffffff', which is wrong as starting_index is only 32 bits wide. Because of this, trying to run an hv-gpci event fails. On a power9 machine:
command#: perf stat -e hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ -C 0 -I 1000
event syntax error: '..bie_count_and_time_tlbie_instructions_issued/' \___ value too big for format, maximum is 4294967295
This patch fixes the issue by changing the starting_index value to '0xffffffff'. After this patch:
command#: perf stat -e hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ -C 0 -I 1000
1.000085786 1,024 hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/
2.000287818 1,024 hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/
2.439113909 17,408 hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/
Fixes: 9e9f6010 ("powerpc/perf/{hv-gpci, hv-common}: generate requests with counters annotated") Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201003074943.338618-1-kjain@linux.ibm.com
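The essence of the fix, sketched with an assumed macro name (the real value lives in the hv-gpci request definitions): the all-indices sentinel must fit the 32-bit starting_index field.

    #define STARTING_INDEX_ALL 0xffffffffU  /* was 0xffffffffffffffffULL: too wide for a u32 */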
-
Oliver O'Halloran authored
If the RTAS call to query the PE address for a device fails, we jump to the err: label, where an error message is printed along with the return code. However, the printed return code comes from the "ret" variable, which isn't set at that point since we assigned the result to "addr" instead. Fix this by consistently using the "ret" variable for the result of the RTAS call helpers and dropping the "addr" local variable. Fixes: 98ba956f ("powerpc/pseries/eeh: Rework device EEH PE determination") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007040903.819081-2-oohall@gmail.com
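An illustrative-only reconstruction of the bug pattern (the helper name is hypothetical): the helper's result landed in "addr" while the error path printed the never-assigned "ret".

    static int pe_addr_sketch(struct pci_dn *pdn)
    {
            int ret;        /* never assigned on the error path */
            int addr;

            addr = rtas_query_pe_addr(pdn);     /* hypothetical helper */
            if (addr < 0)
                    goto err;
            return addr;
    err:
            pr_err("PE address query failed: %d\n", ret);   /* bug */
            return -ENXIO;
    }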
-
Oliver O'Halloran authored
The eeh_pe->config_addr field was supposed to be removed in commit 35d64734 ("powerpc/eeh: Clean up PE addressing") which made it largely unused. Finish the job. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201007040903.819081-1-oohall@gmail.com
-
- 06 Oct, 2020 11 commits
-
-
Scott Cheloha authored
During memory hot-add, dlpar_add_lmb() calls memory_add_physaddr_to_nid() to determine which node id (nid) to use when later calling __add_memory(). This is wasteful. On pseries, memory_add_physaddr_to_nid() finds an appropriate nid for a given address by looking up the LMB containing the address and then passing that LMB to of_drconf_to_nid_single() to get the nid. In dlpar_add_lmb() we get this address from the LMB itself. In short, we have a pointer to an LMB and then we are searching for that LMB *again* in order to find its nid. If we call of_drconf_to_nid_single() directly from dlpar_add_lmb() we can skip the redundant lookup. The only error handling we need to duplicate from memory_add_physaddr_to_nid() is the fallback to the default nid when of_drconf_to_nid_single() returns -1 (NUMA_NO_NODE) or an invalid nid. Skipping the extra lookup makes hot-add operations faster, especially on machines with many LMBs. Consider an LPAR with 126976 LMBs. In one test, hot-adding 126000 LMBs on an unpatched kernel took ~3.5 hours while a patched kernel completed the same operation in ~2 hours:
Unpatched (12450 seconds):
Sep 9 04:06:31 ltc-brazos1 drmgr[810169]: drmgr: -c mem -a -q 126000
Sep 9 04:06:31 ltc-brazos1 kernel: pseries-hotplug-mem: Attempting to hot-add 126000 LMB(s)
[...]
Sep 9 07:34:01 ltc-brazos1 kernel: pseries-hotplug-mem: Memory at 20000000 (drc index 80000002) was hot-added
Patched (7065 seconds):
Sep 8 21:49:57 ltc-brazos1 drmgr[877703]: drmgr: -c mem -a -q 126000
Sep 8 21:49:57 ltc-brazos1 kernel: pseries-hotplug-mem: Attempting to hot-add 126000 LMB(s)
[...]
Sep 8 23:27:42 ltc-brazos1 kernel: pseries-hotplug-mem: Memory at 20000000 (drc index 80000002) was hot-added
It should be noted that the speedup grows more substantial when hot-adding LMBs at the end of the drconf range. This is because we are skipping a linear LMB search. To see the distinction, consider a smaller hot-add test on the same LPAR.
A perf-stat run with 10 iterations showed that hot-adding 4096 LMBs completed less than 1 second faster on a patched kernel:
Unpatched:
Performance counter stats for 'drmgr -c mem -a -q 4096' (10 runs):
104,753.42 msec task-clock              # 0.992 CPUs utilized            ( +- 0.55% )
4,708 context-switches                  # 0.045 K/sec                    ( +- 0.69% )
2,444 cpu-migrations                    # 0.023 K/sec                    ( +- 1.25% )
394 page-faults                         # 0.004 K/sec                    ( +- 0.22% )
445,902,503,057 cycles                  # 4.257 GHz                      ( +- 0.55% ) (66.67%)
8,558,376,740 stalled-cycles-frontend   # 1.92% frontend cycles idle     ( +- 0.88% ) (49.99%)
300,346,181,651 stalled-cycles-backend  # 67.36% backend cycles idle     ( +- 0.76% ) (50.01%)
258,091,488,691 instructions            # 0.58 insn per cycle            # 1.16 stalled cycles per insn ( +- 0.22% ) (66.67%)
70,568,169,256 branches                 # 673.660 M/sec                  ( +- 0.17% ) (50.01%)
3,100,725,426 branch-misses             # 4.39% of all branches          ( +- 0.20% ) (49.99%)
105.583 +- 0.589 seconds time elapsed ( +- 0.56% )
Patched:
Performance counter stats for 'drmgr -c mem -a -q 4096' (10 runs):
104,055.69 msec task-clock              # 0.993 CPUs utilized            ( +- 0.32% )
4,606 context-switches                  # 0.044 K/sec                    ( +- 0.20% )
2,463 cpu-migrations                    # 0.024 K/sec                    ( +- 0.93% )
394 page-faults                         # 0.004 K/sec                    ( +- 0.25% )
442,951,129,921 cycles                  # 4.257 GHz                      ( +- 0.32% ) (66.66%)
8,710,413,329 stalled-cycles-frontend   # 1.97% frontend cycles idle     ( +- 0.47% ) (50.06%)
299,656,905,836 stalled-cycles-backend  # 67.65% backend cycles idle     ( +- 0.39% ) (50.02%)
252,731,168,193 instructions            # 0.57 insn per cycle            # 1.19 stalled cycles per insn ( +- 0.20% ) (66.66%)
68,902,851,121 branches                 # 662.173 M/sec                  ( +- 0.13% ) (49.94%)
3,100,242,882 branch-misses             # 4.50% of all branches          ( +- 0.15% ) (49.98%)
104.829 +- 0.325 seconds time elapsed ( +- 0.31% )
This is consistent. An add-by-count hot-add operation adds LMBs greedily, so LMBs near the start of the drconf range are considered first. On an otherwise idle LPAR with so many LMBs we would expect to find the LMBs we need near the start of the drconf range, hence the smaller speedup. Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200916145122.3408129-1-cheloha@linux.ibm.com
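A hedged sketch of the shortcut (assuming of_drconf_to_nid_single() is made callable from the hotplug code, as the commit describes), including the duplicated fallback handling:

    static int lmb_to_nid_sketch(struct drmem_lmb *lmb)
    {
            int nid = of_drconf_to_nid_single(lmb);

            /* fallback duplicated from memory_add_physaddr_to_nid() */
            if (nid < 0 || !node_possible(nid))
                    nid = first_online_node;

            return nid;
    }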
-
Andrew Donnellan authored
Add a selftest to test the basic functionality of CONFIG_RTAS_FILTER. Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com> [mpe: Change rmo_start/end to 32-bit to avoid build errors on ppc64] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200820044512.7543-2-ajd@linux.ibm.com
-
Andrew Donnellan authored
A number of userspace utilities depend on making calls to RTAS to retrieve information and update various things. The existing API through which we expose RTAS to userspace exposes more RTAS functionality than we actually need, through the sys_rtas syscall, which allows root (or anyone with CAP_SYS_ADMIN) to make any RTAS call they want with arbitrary arguments. Many RTAS calls take the address of a buffer as an argument, and it's up to the caller to supply the physical address of that buffer. We allocate a buffer (the "RMO buffer") in the Real Memory Area that RTAS can access, and then expose the physical address and size of this buffer in /proc/powerpc/rtas/rmo_buffer. Userspace is expected to read this address, poke at the buffer using /dev/mem, and pass an address in the RMO buffer to the RTAS call. However, there's nothing stopping the caller from specifying whatever address they want in the RTAS call, and it's easy to construct a series of RTAS calls that can overwrite arbitrary bytes (even without /dev/mem access). Additionally, there are some RTAS calls that do potentially dangerous things and for which there are no legitimate userspace use cases. In the past, this would not have been a particularly big deal as it was assumed that root could modify all system state freely, but with Secure Boot and lockdown we need to care about this. We can't fundamentally change the ABI at this point; however, we can address this by implementing a filter that checks RTAS calls against a list of permitted calls and forces the caller to use addresses within the RMO buffer. The list is based off the list of calls that are used by the librtas userspace library, and has been tested with a number of existing userspace RTAS utilities. For compatibility with any applications we are not aware of that require other calls, the filter can be turned off at build time. Cc: stable@vger.kernel.org Reported-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200820044512.7543-1-ajd@linux.ibm.com
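A hedged sketch of the filter shape (entry layout and names are illustrative, not the kernel's exact table): each permitted call can pin argument indices that must reference the RMO buffer.

    struct rtas_filter_sketch {
            const char *name;   /* RTAS call name */
            int buf_idx;        /* arg that must point into the RMO buffer, or -1 */
            int size_idx;       /* arg holding that buffer's length, or -1 */
    };

    static const struct rtas_filter_sketch filters[] = {
            { "get-time-of-day",            -1, -1 },
            { "ibm,get-system-parameter",    1,  2 },
            { "ibm,configure-connector",     0, -1 },
    };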
-
Athira Rajeev authored
The PMU counter support functions enforce event constraints for a group of events to check if all events in the group can be monitored. For event codes using PMC5 and PMC6 (500fa and 600f4 respectively), not all constraints are applicable; for example, the threshold or sample bits are not. But the current code includes PMC5 and PMC6 in some group constraints (like the IC_DC qualifier bits) which are actually not applicable, and this results in those events not getting counted when scheduled along with a group of other events. Fix this by excluding PMC5/6 from constraints which are not relevant for them. Fixes: 7ffd948f ("powerpc/perf: factor out power8 pmu functions") Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1600672204-1610-1-git-send-email-atrajeev@linux.vnet.ibm.com
-
Srikar Dronamraju authored
All threads of a SMT4/SMT8 core are either all part of a CPU's coregroup mask or all outside it. Use this relation to reduce the number of iterations needed to find all the CPUs that share the same coregroup. Use a temporary mask to iterate through the CPUs that may share the coregroup mask. Also, instead of setting one CPU at a time into cpu_coregroup_mask, copy the SMT4/SMT8 submask in one shot. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-12-srikar@linux.vnet.ibm.com
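A hedged sketch of the approach (the per-cpu map and helper names are assumptions): walk a temporary mask of same-node CPUs and, on a coregroup match, OR in the candidate's whole SMT sibling mask in one shot; either way, drop those siblings from further consideration.

    static void update_coregroup_mask_sketch(int cpu, cpumask_var_t work)
    {
            struct cpumask *cg = per_cpu(cpu_coregroup_map, cpu);
            int id = cpu_to_coregroup_id(cpu);
            int i;

            cpumask_and(work, cpu_online_mask, cpu_cpu_mask(cpu));
            for_each_cpu(i, work) {
                    if (cpu_to_coregroup_id(i) == id)
                            cpumask_or(cg, cg, cpu_smt_mask(i));
                    /* all of i's SMT siblings share i's verdict */
                    cpumask_andnot(work, work, cpu_smt_mask(i));
            }
    }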
-
Srikar Dronamraju authored
Move the logic for updating the coregroup mask of a CPU into its own function. This will help in reworking the coregroup mask update in a subsequent patch. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-11-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
All threads of a SMT4 core are either all part of this CPU's l2-cache mask or all unrelated to it. Use this relation to reduce the number of iterations needed to find all the CPUs that share the same l2-cache. Use a temporary mask to iterate through the CPUs that may share the l2_cache mask. Also, instead of setting one CPU at a time into cpu_l2_cache_mask, copy the SMT4 submask in one shot. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-10-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
CACHE and COREGROUP domains are now part of the default topology. However, on systems that don't support CACHE or COREGROUP, these domains will eventually be degenerated. The degeneration happens per CPU. Note that the current fixup_topology() logic ensures that the mask of a domain that is not supported on the current platform is set to that of the previous domain. Instead of waiting for the scheduler to degenerate the domains, try to consolidate them based on their masks and sd_flags. This is done just before setting the scheduler topology. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-9-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
Currently on hotplug/hotunplug, a CPU iterates through all the CPUs in its core to find the threads in its thread group. However, this info is already captured in cpu_l1_cache_map. Hence reduce the iterations and clean up the add_cpu_to_smallcore_masks() function. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-8-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
update_mask_by_l2() is called from only one place, which passes cpu_l2_cache_mask as a parameter. Instead of passing cpu_l2_cache_mask, use it directly in update_mask_by_l2(). Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-7-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
All the arch-specific topology cpumasks are within a node/DIE. However, when setting these per-CPU cpumasks, the system traverses all the online CPUs. This is redundant. Reduce the traversal to only the CPUs that are online in the node to which the CPU belongs. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200921095653.9701-6-srikar@linux.vnet.ibm.com
-