Commit aa8c3db4 authored by Linus Torvalds

Merge tag 'x86_cache_for_v6.3_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 resource control updates from Borislav Petkov:

 - Add support for a new AMD feature called slow memory bandwidth
   allocation. Its goal is to control resource allocation in external
   slow memory which is connected to the machine, for example through
   CXL devices, accelerators, etc.

* tag 'x86_cache_for_v6.3_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/resctrl: Fix a silly -Wunused-but-set-variable warning
  Documentation/x86: Update resctrl.rst for new features
  x86/resctrl: Add interface to write mbm_local_bytes_config
  x86/resctrl: Add interface to write mbm_total_bytes_config
  x86/resctrl: Add interface to read mbm_local_bytes_config
  x86/resctrl: Add interface to read mbm_total_bytes_config
  x86/resctrl: Support monitor configuration
  x86/resctrl: Add __init attribute to rdt_get_mon_l3_config()
  x86/resctrl: Detect and configure Slow Memory Bandwidth Allocation
  x86/resctrl: Include new features in command line options
  x86/cpufeatures: Add Bandwidth Monitoring Event Configuration feature flag
  x86/resctrl: Add a new resource type RDT_RESOURCE_SMBA
  x86/cpufeatures: Add Slow Memory Bandwidth Allocation feature flag
  x86/resctrl: Replace smp_call_function_many() with on_each_cpu_mask()
parents 1adce1b9 793207ba
@@ -5221,7 +5221,7 @@

	rdt=		[HW,X86,RDT]
			Turn on/off individual RDT features. List is:
			cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp,
			mba, smba, bmec.
			E.g. to turn on cmt and turn off mba use:
				rdt=cmt,!mba
...
@@ -17,14 +17,21 @@

AMD refers to this feature as AMD Platform Quality of Service(AMD QoS).

This feature is enabled by the CONFIG_X86_CPU_RESCTRL and the x86 /proc/cpuinfo
flag bits:

=============================================== ================================
RDT (Resource Director Technology) Allocation   "rdt_a"
CAT (Cache Allocation Technology)               "cat_l3", "cat_l2"
CDP (Code and Data Prioritization)              "cdp_l3", "cdp_l2"
CQM (Cache QoS Monitoring)                      "cqm_llc", "cqm_occup_llc"
MBM (Memory Bandwidth Monitoring)               "cqm_mbm_total", "cqm_mbm_local"
MBA (Memory Bandwidth Allocation)               "mba"
SMBA (Slow Memory Bandwidth Allocation)         ""
BMEC (Bandwidth Monitoring Event Configuration) ""
=============================================== ================================

Historically, new features were made visible by default in /proc/cpuinfo. This
resulted in the feature flags becoming hard to parse by humans. Adding a new
flag to /proc/cpuinfo should be avoided if user space can obtain information
about the feature from resctrl's info directory.

To use the feature mount the file system::
@@ -161,6 +168,83 @@ with the following files:

"mon_features":
	Lists the monitoring events if
	monitoring is enabled for the resource.
	Example::

		# cat /sys/fs/resctrl/info/L3_MON/mon_features
		llc_occupancy
		mbm_total_bytes
		mbm_local_bytes

	If the system supports Bandwidth Monitoring Event
	Configuration (BMEC), then the bandwidth events will
	be configurable. The output will be::

		# cat /sys/fs/resctrl/info/L3_MON/mon_features
		llc_occupancy
		mbm_total_bytes
		mbm_total_bytes_config
		mbm_local_bytes
		mbm_local_bytes_config
"mbm_total_bytes_config", "mbm_local_bytes_config":
	Read/write files containing the configuration for the mbm_total_bytes
	and mbm_local_bytes events, respectively, when the Bandwidth
	Monitoring Event Configuration (BMEC) feature is supported.
	The event configuration settings are domain specific and affect
	all the CPUs in the domain. When either event configuration is
	changed, the bandwidth counters for all RMIDs of both events
	(mbm_total_bytes as well as mbm_local_bytes) are cleared for that
	domain. The next read for every RMID will report "Unavailable"
	and subsequent reads will report the valid value.

	Following are the types of events supported:

	==== ========================================================
	Bits Description
	==== ========================================================
	6    Dirty Victims from the QOS domain to all types of memory
	5    Reads to slow memory in the non-local NUMA domain
	4    Reads to slow memory in the local NUMA domain
	3    Non-temporal writes to non-local NUMA domain
	2    Non-temporal writes to local NUMA domain
	1    Reads to memory in the non-local NUMA domain
	0    Reads to memory in the local NUMA domain
	==== ========================================================

	By default, the mbm_total_bytes configuration is set to 0x7f to count
	all the event types and the mbm_local_bytes configuration is set to
	0x15 to count all the local memory events.
	Examples:

	* To view the current configuration::

		# cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
		0=0x7f;1=0x7f;2=0x7f;3=0x7f

		# cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
		0=0x15;1=0x15;3=0x15;4=0x15

	* To change the mbm_total_bytes to count only reads on domain 0,
	  the bits 0, 1, 4 and 5 need to be set, which is 110011b in binary
	  (in hexadecimal 0x33)::

		# echo "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
		# cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
		0=0x33;1=0x7f;2=0x7f;3=0x7f

	* To change the mbm_local_bytes to count all the slow memory reads on
	  domains 0 and 1, the bits 4 and 5 need to be set, which is 110000b
	  in binary (in hexadecimal 0x30)::

		# echo "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
		# cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
		0=0x30;1=0x30;3=0x15;4=0x15
"max_threshold_occupancy":
	Read/write file provides the largest value (in

@@ -464,6 +548,25 @@ Memory bandwidth domain is L3 cache.

	MB:<cache_id0>=bw_MBps0;<cache_id1>=bw_MBps1;...
Slow Memory Bandwidth Allocation (SMBA)
---------------------------------------
AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
CXL.memory is the only supported "slow" memory device. With the
support of SMBA, the hardware enables bandwidth allocation on
the slow memory devices. If there are multiple such devices in
the system, the throttling logic groups all the slow sources
together and applies the limit on them as a whole.

The presence of the SMBA feature (with CXL.memory) is independent of
whether slow memory devices are present. If there are no such devices
on the system, then configuring SMBA will have no impact on the
performance of the system.

The bandwidth domain for slow memory is L3 cache. Its schemata file
is formatted as::

	SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
Reading/writing the schemata file
---------------------------------
Reading the schemata file will show the state of all resources

@@ -479,6 +582,46 @@ which you wish to change. E.g.

  L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
Reading/writing the schemata file (on AMD systems)
--------------------------------------------------
Reading the schemata file will show the current bandwidth limit on all
domains. The allocated resources are in multiples of one eighth GB/s.
When writing to the file, you need to specify the cache id for which you
wish to configure the bandwidth limit.

For example, to allocate a 2GB/s limit on the first cache id::

	# cat schemata
	MB:0=2048;1=2048;2=2048;3=2048
	L3:0=ffff;1=ffff;2=ffff;3=ffff

	# echo "MB:1=16" > schemata
	# cat schemata
	MB:0=2048;1=  16;2=2048;3=2048
	L3:0=ffff;1=ffff;2=ffff;3=ffff
Reading/writing the schemata file (on AMD systems) with SMBA feature
--------------------------------------------------------------------
Reading and writing the schemata file is the same as in the section
above without SMBA.

For example, to allocate an 8GB/s limit on the first cache id::

	# cat schemata
	SMBA:0=2048;1=2048;2=2048;3=2048
	MB:0=2048;1=2048;2=2048;3=2048
	L3:0=ffff;1=ffff;2=ffff;3=ffff

	# echo "SMBA:1=64" > schemata
	# cat schemata
	SMBA:0=2048;1=  64;2=2048;3=2048
	MB:0=2048;1=2048;2=2048;3=2048
	L3:0=ffff;1=ffff;2=ffff;3=ffff
Cache Pseudo-Locking
====================
CAT enables a user to specify the amount of cache space that an
...
@@ -307,6 +307,8 @@

#define X86_FEATURE_SGX_EDECCSSA	(11*32+18) /* "" SGX EDECCSSA user leaf function */
#define X86_FEATURE_CALL_DEPTH		(11*32+19) /* "" Call depth tracking for RSB stuffing */
#define X86_FEATURE_MSR_TSX_CTRL	(11*32+20) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
#define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
#define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */
...
@@ -1084,6 +1084,8 @@

/* - AMD: */
#define MSR_IA32_MBA_BW_BASE		0xc0000200
#define MSR_IA32_SMBA_BW_BASE 0xc0000280
#define MSR_IA32_EVT_CFG_BASE 0xc0000400
/* MSR_IA32_VMX_MISC bits */
#define MSR_IA32_VMX_MISC_INTEL_PT	(1ULL << 14)
...
@@ -68,6 +68,8 @@ static const struct cpuid_dep cpuid_deps[] = {

	{ X86_FEATURE_CQM_OCCUP_LLC,	X86_FEATURE_CQM_LLC   },
	{ X86_FEATURE_CQM_MBM_TOTAL,	X86_FEATURE_CQM_LLC   },
	{ X86_FEATURE_CQM_MBM_LOCAL,	X86_FEATURE_CQM_LLC   },
{ X86_FEATURE_BMEC, X86_FEATURE_CQM_MBM_TOTAL },
{ X86_FEATURE_BMEC, X86_FEATURE_CQM_MBM_LOCAL },
	{ X86_FEATURE_AVX512_BF16,	X86_FEATURE_AVX512VL  },
	{ X86_FEATURE_AVX512_FP16,	X86_FEATURE_AVX512BW  },
	{ X86_FEATURE_ENQCMD,		X86_FEATURE_XSAVES    },
...
@@ -100,6 +100,18 @@ struct rdt_hw_resource rdt_resources_all[] = {

			.fflags			= RFTYPE_RES_MB,
		},
	},
	[RDT_RESOURCE_SMBA] =
	{
		.r_resctrl = {
			.rid			= RDT_RESOURCE_SMBA,
			.name			= "SMBA",
			.cache_level		= 3,
			.domains		= domain_init(RDT_RESOURCE_SMBA),
			.parse_ctrlval		= parse_bw,
			.format_str		= "%d=%*u",
			.fflags			= RFTYPE_RES_MB,
		},
	},
};
/*
@@ -150,6 +162,13 @@ bool is_mba_sc(struct rdt_resource *r)

	if (!r)
		return rdt_resources_all[RDT_RESOURCE_MBA].r_resctrl.membw.mba_sc;

	/*
	 * The software controller support is only applicable to MBA resource.
	 * Make sure to check for resource type.
	 */
	if (r->rid != RDT_RESOURCE_MBA)
		return false;

	return r->membw.mba_sc;
}
@@ -213,9 +232,15 @@ static bool __rdt_get_mem_config_amd(struct rdt_resource *r)

	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
	union cpuid_0x10_3_eax eax;
	union cpuid_0x10_x_edx edx;
	u32 ebx, ecx, subleaf;

	/*
	 * Query CPUID_Fn80000020_EDX_x01 for MBA and
	 * CPUID_Fn80000020_EDX_x02 for SMBA
	 */
	subleaf = (r->rid == RDT_RESOURCE_SMBA) ? 2 : 1;

	cpuid_count(0x80000020, subleaf, &eax.full, &ebx, &ecx, &edx.full);
	hw_res->num_closid = edx.split.cos_max + 1;
	r->default_ctrl = MAX_MBA_BW_AMD;
@@ -647,6 +672,8 @@ enum {

	RDT_FLAG_L2_CAT,
	RDT_FLAG_L2_CDP,
	RDT_FLAG_MBA,
	RDT_FLAG_SMBA,
	RDT_FLAG_BMEC,
};
#define RDT_OPT(idx, n, f)	\

@@ -670,6 +697,8 @@ static struct rdt_options rdt_options[] __initdata = {

	RDT_OPT(RDT_FLAG_L2_CAT,    "l2cat",	X86_FEATURE_CAT_L2),
	RDT_OPT(RDT_FLAG_L2_CDP,    "l2cdp",	X86_FEATURE_CDP_L2),
	RDT_OPT(RDT_FLAG_MBA,	    "mba",	X86_FEATURE_MBA),
	RDT_OPT(RDT_FLAG_SMBA,	    "smba",	X86_FEATURE_SMBA),
	RDT_OPT(RDT_FLAG_BMEC,	    "bmec",	X86_FEATURE_BMEC),
};

#define NUM_RDT_OPTIONS	ARRAY_SIZE(rdt_options)
@@ -699,7 +728,7 @@ static int __init set_rdt_options(char *str)

}
__setup("rdt", set_rdt_options);

bool __init rdt_cpu_has(int flag)
{
	bool ret = boot_cpu_has(flag);
	struct rdt_options *o;

@@ -734,6 +763,19 @@ static __init bool get_mem_config(void)

	return false;
}
static __init bool get_slow_mem_config(void)
{
	struct rdt_hw_resource *hw_res = &rdt_resources_all[RDT_RESOURCE_SMBA];

	if (!rdt_cpu_has(X86_FEATURE_SMBA))
		return false;

	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
		return __rdt_get_mem_config_amd(&hw_res->r_resctrl);

	return false;
}
static __init bool get_rdt_alloc_resources(void)
{
	struct rdt_resource *r;

@@ -764,6 +806,9 @@ static __init bool get_rdt_alloc_resources(void)

	if (get_mem_config())
		ret = true;

	if (get_slow_mem_config())
		ret = true;

	return ret;
}
@@ -853,6 +898,9 @@ static __init void rdt_init_res_defs_amd(void)

		} else if (r->rid == RDT_RESOURCE_MBA) {
			hw_res->msr_base = MSR_IA32_MBA_BW_BASE;
			hw_res->msr_update = mba_wrmsr_amd;
		} else if (r->rid == RDT_RESOURCE_SMBA) {
			hw_res->msr_base = MSR_IA32_SMBA_BW_BASE;
			hw_res->msr_update = mba_wrmsr_amd;
		}
	}
}
...
@@ -209,7 +209,7 @@ static int parse_line(char *line, struct resctrl_schema *s,

	unsigned long dom_id;

	if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP &&
	    (r->rid == RDT_RESOURCE_MBA || r->rid == RDT_RESOURCE_SMBA)) {
		rdt_last_cmd_puts("Cannot pseudo-lock MBA resource\n");
		return -EINVAL;
	}
@@ -310,7 +310,6 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)

	enum resctrl_conf_type t;
	cpumask_var_t cpu_mask;
	struct rdt_domain *d;
	u32 idx;

	if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))

@@ -341,13 +340,9 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)

	if (cpumask_empty(cpu_mask))
		goto done;

	/* Update resource control msr on all the CPUs. */
	on_each_cpu_mask(cpu_mask, rdt_ctrl_update, &msr_param, 1);

done:
	free_cpumask_var(cpu_mask);
...
@@ -30,6 +30,29 @@

 */
#define MBM_CNTR_WIDTH_OFFSET_MAX	(62 - MBM_CNTR_WIDTH_BASE)
/* Reads to Local DRAM Memory */
#define READS_TO_LOCAL_MEM BIT(0)
/* Reads to Remote DRAM Memory */
#define READS_TO_REMOTE_MEM BIT(1)
/* Non-Temporal Writes to Local Memory */
#define NON_TEMP_WRITE_TO_LOCAL_MEM BIT(2)
/* Non-Temporal Writes to Remote Memory */
#define NON_TEMP_WRITE_TO_REMOTE_MEM BIT(3)
/* Reads to Local Memory the system identifies as "Slow Memory" */
#define READS_TO_LOCAL_S_MEM BIT(4)
/* Reads to Remote Memory the system identifies as "Slow Memory" */
#define READS_TO_REMOTE_S_MEM BIT(5)
/* Dirty Victims to All Types of Memory */
#define DIRTY_VICTIMS_TO_ALL_MEM BIT(6)
/* Max event bits supported */
#define MAX_EVT_CONFIG_BITS GENMASK(6, 0)
struct rdt_fs_context {
	struct kernfs_fs_context	kfc;

@@ -52,11 +75,13 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
/**
 * struct mon_evt - Entry in the event list of a resource
 * @evtid:		event id
 * @name:		name of the event
 * @configurable:	true if the event is configurable
 * @list:		entry in &rdt_resource->evt_list
 */
struct mon_evt {
	enum resctrl_event_id	evtid;
	char			*name;
	bool			configurable;
	struct list_head	list;
};
@@ -409,6 +434,7 @@ enum resctrl_res_level {

	RDT_RESOURCE_L3,
	RDT_RESOURCE_L2,
	RDT_RESOURCE_MBA,
	RDT_RESOURCE_SMBA,

	/* Must be the last */
	RDT_NUM_RESOURCES,

@@ -511,6 +537,7 @@ void closid_free(int closid);
int alloc_rmid(void);
void free_rmid(u32 rmid);
int rdt_get_mon_l3_config(struct rdt_resource *r);
bool __init rdt_cpu_has(int flag);
void mon_event_count(void *info);
int rdtgroup_mondata_show(struct seq_file *m, void *arg);
void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
@@ -527,5 +554,6 @@ bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d);

void __check_limbo(struct rdt_domain *d, bool force_free);
void rdt_domain_reconfigure_cdp(struct rdt_resource *r);
void __init thread_throttle_mode_init(void);
void __init mbm_config_rftype_init(const char *config);

#endif /* _ASM_X86_RESCTRL_INTERNAL_H */
...
@@ -204,6 +204,23 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,

	}
}
/*
 * Assumes that hardware counters are also reset and thus that there is
 * no need to record initial non-zero counts.
 */
void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_domain *d)
{
	struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);

	if (is_mbm_total_enabled())
		memset(hw_dom->arch_mbm_total, 0,
		       sizeof(*hw_dom->arch_mbm_total) * r->num_rmid);

	if (is_mbm_local_enabled())
		memset(hw_dom->arch_mbm_local, 0,
		       sizeof(*hw_dom->arch_mbm_local) * r->num_rmid);
}
static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
{
	u64 shift = 64 - width, chunks;
@@ -763,7 +780,7 @@ static void l3_mon_evt_init(struct rdt_resource *r)

		list_add_tail(&mbm_local_event.list, &r->evt_list);
}

int __init rdt_get_mon_l3_config(struct rdt_resource *r)
{
	unsigned int mbm_offset = boot_cpu_data.x86_cache_mbm_width_offset;
	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);

@@ -800,6 +817,17 @@ int rdt_get_mon_l3_config(struct rdt_resource *r)

	if (ret)
		return ret;
	if (rdt_cpu_has(X86_FEATURE_BMEC)) {
		if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) {
			mbm_total_event.configurable = true;
			mbm_config_rftype_init("mbm_total_bytes_config");
		}
		if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) {
			mbm_local_event.configurable = true;
			mbm_config_rftype_init("mbm_local_bytes_config");
		}
	}
	l3_mon_evt_init(r);

	r->mon_capable = true;
...
@@ -45,6 +45,8 @@ static const struct cpuid_bit cpuid_bits[] = {

	{ X86_FEATURE_CPB,		CPUID_EDX,   9, 0x80000007, 0 },
	{ X86_FEATURE_PROC_FEEDBACK,	CPUID_EDX,  11, 0x80000007, 0 },
	{ X86_FEATURE_MBA,		CPUID_EBX,   6, 0x80000008, 0 },
	{ X86_FEATURE_SMBA,		CPUID_EBX,   2, 0x80000020, 0 },
	{ X86_FEATURE_BMEC,		CPUID_EBX,   3, 0x80000020, 0 },
	{ X86_FEATURE_PERFMON_V2,	CPUID_EAX,   0, 0x80000022, 0 },
	{ X86_FEATURE_AMD_LBR_V2,	CPUID_EAX,   1, 0x80000022, 0 },
	{ 0, 0, 0, 0, 0 }
...
@@ -250,6 +250,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,

void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
			     u32 rmid, enum resctrl_event_id eventid);
/**
 * resctrl_arch_reset_rmid_all() - Reset all private state associated with
 *				   all rmids and eventids.
 * @r:	The resctrl resource.
 * @d:	The domain for which all architectural counter state will
 *	be cleared.
 *
 * This can be called from any CPU.
 */
void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_domain *d);
extern unsigned int resctrl_rmid_realloc_threshold;
extern unsigned int resctrl_rmid_realloc_limit;
...