Commit fab88961 authored by Ian Rogers, committed by Namhyung Kim

perf vendor events: Add/update icelakex events/metrics

Update events from v1.24 to v1.26.
Add TMA metrics v4.8.
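As a quick sanity check (illustrative only; it assumes a perf build that ships these files, running on an Icelake Server system), the refreshed TMA metrics can be driven by metric group, for example:

  $ perf list metricgroup                 # list the available metric groups
  $ perf stat -M TopdownL1 -a -- sleep 1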

Bring in the event updates v1.26:
https://github.com/intel/perfmon/commit/c607c739e05f2569f95998cc98e1283f042b4fd1
v1.25:
https://github.com/intel/perfmon/commit/42d996769069921ec06f6fbb600b0c663b9ec5a9

The TMA 4.8 information was added in:
https://github.com/intel/perfmon/commit/59194d4d90ca50a3fcb2de0d82b9f6fc0c9a5736

Add the event SW_PREFETCH_ACCESS.ANY.
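For example, with the updated icelakex event list installed, the new event can be counted by name (illustrative command; any workload can stand in for sleep):

  $ perf stat -e SW_PREFETCH_ACCESS.ANY -a -- sleep 1
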
Co-authored-by: Weilin Wang <weilin.wang@intel.com>
Co-authored-by: Caleb Biggers <caleb.biggers@intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20240620181752.3945845-18-irogers@google.com
parent 91b59892
[
{
"BriefDescription": "Counts the number of cache lines replaced in L1 data cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x48",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.L2_STALL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "Number of L1D misses that are outstanding",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -52,6 +58,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -60,6 +67,7 @@
},
{
"BriefDescription": "Cache lines that are evicted by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.NON_SILENT",
"PublicDescription": "Counts the number of lines that are evicted by the L2 cache due to L2 cache fills. Evicted lines are delivered to the L3, which may or may not cache them, according to system load and priorities.",
@@ -68,6 +76,7 @@
},
{
"BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.SILENT",
"PublicDescription": "Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill. These lines are typically in Shared or Exclusive state. A non-threaded event.",
@@ -76,6 +85,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts the total number of L2 code requests.",
@@ -84,6 +94,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.",
@@ -92,6 +103,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"PublicDescription": "Counts demand requests that miss L2 cache.",
@@ -100,6 +112,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -108,6 +121,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
@@ -116,6 +130,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Counts L2 cache misses when fetching instructions.",
@@ -124,6 +139,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.",
@@ -132,6 +148,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "Counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "SW prefetch requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_HIT",
"PublicDescription": "Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -164,6 +184,7 @@
},
{
"BriefDescription": "SW prefetch requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_MISS",
"PublicDescription": "Counts Software prefetch requests that miss the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -172,6 +193,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "Counts L2 writebacks that access L2 cache.",
@@ -180,6 +202,7 @@
},
{
"BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -188,6 +211,7 @@
},
{
"BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -196,6 +220,7 @@
},
{
"BriefDescription": "Retired load instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_LOADS",
@@ -206,6 +231,7 @@
},
{
"BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_STORES",
@@ -216,6 +242,7 @@
},
{
"BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ANY",
@@ -226,6 +253,7 @@
},
{
"BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.LOCK_LOADS",
@@ -236,6 +264,7 @@
},
{
"BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
@@ -246,6 +275,7 @@
},
{
"BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_STORES",
@@ -256,6 +286,7 @@
},
{
"BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
@@ -266,6 +297,7 @@
},
{
"BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
@@ -276,6 +308,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
@@ -286,6 +319,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Deprecated": "1",
"EventCode": "0xd2",
@@ -296,6 +330,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Deprecated": "1",
"EventCode": "0xd2",
@@ -306,6 +341,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
@@ -316,6 +352,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE",
@@ -326,6 +363,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
@@ -336,6 +374,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from local dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
@@ -346,6 +385,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from remote dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM",
@@ -355,6 +395,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources was forwarded from a remote cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD",
@@ -365,6 +406,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources was remote HITM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM",
@@ -375,6 +417,7 @@
},
{
"BriefDescription": "Retired load instructions with remote Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM",
@@ -385,6 +428,7 @@
},
{
"BriefDescription": "Retired instructions with at least 1 uncacheable load or Bus Lock.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd4",
"EventName": "MEM_LOAD_MISC_RETIRED.UC",
@@ -395,6 +439,7 @@
},
{
"BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.FB_HIT",
@@ -405,6 +450,7 @@
},
{
"BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_HIT",
@@ -415,6 +461,7 @@
},
{
"BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_MISS",
@@ -425,6 +472,7 @@
},
{
"BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_HIT",
@@ -435,6 +483,7 @@
},
{
"BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_MISS",
@@ -445,6 +494,7 @@
},
{
"BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_HIT",
@@ -455,6 +505,7 @@
},
{
"BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_MISS",
@@ -465,6 +516,7 @@
},
{
"BriefDescription": "Retired load instructions with local Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.LOCAL_PMM",
@@ -475,6 +527,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -484,6 +537,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -493,6 +547,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -502,6 +557,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -511,6 +567,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -520,6 +577,7 @@
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -529,6 +587,7 @@
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -538,6 +597,7 @@
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -547,6 +607,7 @@
},
{
"BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -556,6 +617,7 @@
},
{
"BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -565,6 +627,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -574,6 +637,7 @@
},
{
"BriefDescription": "Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -583,6 +647,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -592,6 +657,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -601,6 +667,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -610,6 +677,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -619,6 +687,7 @@
},
{
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -628,6 +697,7 @@
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -637,6 +707,7 @@
},
{
"BriefDescription": "Counts hardware and software prefetches to all cache levels that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PREFETCHES.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -646,6 +717,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -655,6 +727,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -664,6 +737,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop that hit in another core, which did not forward the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -673,6 +747,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -682,6 +757,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified).",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -691,6 +767,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -700,6 +777,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -709,6 +787,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -718,6 +797,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -727,6 +807,7 @@
},
{
"BriefDescription": "Counts streaming stores that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -736,6 +817,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -744,6 +826,7 @@
},
{
"BriefDescription": "Counts memory transactions sent to the uncore.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"PublicDescription": "Counts memory transactions sent to the uncore including requests initiated by the core, all L3 prefetches, reads resulting from page walks, and snoop responses.",
@@ -752,6 +835,7 @@
},
{
"BriefDescription": "Counts cacheable and non-cacheable code reads to the core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "Counts both cacheable and non-cacheable code reads to the core.",
@@ -760,6 +844,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -768,6 +853,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
@@ -776,6 +862,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding data read requests pending. Data read requests include cacheable demand reads and L2 prefetches, but do not include RFOs, code reads or prefetches to the L3. Reads due to page walks resulting from any request type will also be counted. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -784,6 +871,7 @@
},
{
"BriefDescription": "Cycles where at least 1 outstanding data read request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
@@ -793,6 +881,7 @@
},
{
"BriefDescription": "Cycles with outstanding code read requests pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
@@ -802,6 +891,7 @@
},
{
"BriefDescription": "Cycles where at least 1 outstanding Demand RFO request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
@@ -811,6 +901,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding code read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding code read requests pending. Code Read requests include both cacheable and non-cacheable Code Reads. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -819,6 +910,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -827,6 +919,7 @@
},
{
"BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF4",
"EventName": "SQ_MISC.BUS_LOCK",
"PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.",
@@ -835,14 +928,24 @@
},
{
"BriefDescription": "Cycles the queue waiting for offcore responses is full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SQ_FULL",
"PublicDescription": "Counts the cycles for which the thread is active and the queue waiting for responses from the uncore cannot take any more entries.",
"SampleAfterValue": "100003", "SampleAfterValue": "100003",
"UMask": "0x4" "UMask": "0x4"
}, },
{
"BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
"Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.ANY",
"SampleAfterValue": "100003",
"UMask": "0xf"
},
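SW_PREFETCH_ACCESS.ANY is the event this update adds; its encoding (EventCode 0x32, UMask 0xf) is given in the entry above. A hedged Python sketch that turns those two fields into a raw perf event descriptor and counts it, assuming a non-hybrid system whose core PMU is exposed as "cpu" (as on Icelake server parts); the "sleep 1" workload is only a placeholder.

    import subprocess

    # Build a raw core-PMU event from the JSON fields shown above.
    event_code = "0x32"
    umask = "0xf"
    raw_event = f"cpu/event={event_code},umask={umask},name=SW_PREFETCH_ACCESS.ANY/"

    # Count software prefetches over a placeholder workload.
    subprocess.run(["perf", "stat", "-e", raw_event, "--", "sleep", "1"], check=False)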
{ {
"BriefDescription": "Number of PREFETCHNTA instructions executed.", "BriefDescription": "Number of PREFETCHNTA instructions executed.",
"Counter": "0,1,2,3",
"EventCode": "0x32", "EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.NTA", "EventName": "SW_PREFETCH_ACCESS.NTA",
"PublicDescription": "Counts the number of PREFETCHNTA instructions executed.", "PublicDescription": "Counts the number of PREFETCHNTA instructions executed.",
...@@ -851,6 +954,7 @@ ...@@ -851,6 +954,7 @@
}, },
{ {
"BriefDescription": "Number of PREFETCHW instructions executed.", "BriefDescription": "Number of PREFETCHW instructions executed.",
"Counter": "0,1,2,3",
"EventCode": "0x32", "EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.PREFETCHW", "EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
"PublicDescription": "Counts the number of PREFETCHW instructions executed.", "PublicDescription": "Counts the number of PREFETCHW instructions executed.",
...@@ -859,6 +963,7 @@ ...@@ -859,6 +963,7 @@
}, },
{ {
"BriefDescription": "Number of PREFETCHT0 instructions executed.", "BriefDescription": "Number of PREFETCHT0 instructions executed.",
"Counter": "0,1,2,3",
"EventCode": "0x32", "EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T0", "EventName": "SW_PREFETCH_ACCESS.T0",
"PublicDescription": "Counts the number of PREFETCHT0 instructions executed.", "PublicDescription": "Counts the number of PREFETCHT0 instructions executed.",
...@@ -867,6 +972,7 @@ ...@@ -867,6 +972,7 @@
}, },
{ {
"BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.", "BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
"Counter": "0,1,2,3",
"EventCode": "0x32", "EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T1_T2", "EventName": "SW_PREFETCH_ACCESS.T1_T2",
"PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.", "PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.",
......
[
{
"Unit": "core",
"CountersNumFixed": "4",
"CountersNumGeneric": "8"
},
{
"Unit": "CHA",
"CountersNumFixed": "0",
"CountersNumGeneric": "4"
},
{
"Unit": "IIO",
"CountersNumFixed": "0",
"CountersNumGeneric": "4"
},
{
"Unit": "IRP",
"CountersNumFixed": "0",
"CountersNumGeneric": "2"
},
{
"Unit": "iMC",
"CountersNumFixed": "1",
"CountersNumGeneric": "4"
},
{
"Unit": "M2M",
"CountersNumFixed": "0",
"CountersNumGeneric": "4"
},
{
"Unit": "UPI",
"CountersNumFixed": "0",
"CountersNumGeneric": "4"
},
{
"Unit": "M2PCIe",
"CountersNumFixed": "0",
"CountersNumGeneric": "4"
},
{
"Unit": "M3UPI",
"CountersNumFixed": "0",
"CountersNumGeneric": "4"
},
{
"Unit": "PCU",
"CountersNumFixed": "0",
"CountersNumGeneric": "4"
},
{
"Unit": "UBOX",
"CountersNumFixed": 1,
"CountersNumGeneric": "2"
}
]
\ No newline at end of file
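The new counter file above lists, for each PMU unit, how many fixed and generic counters the tool can schedule on it. A small sketch that loads a file of this shape and prints the per-unit totals; the path "counter.json" is hypothetical.

    import json

    # Load a unit/counter description file shaped like the JSON above.
    with open("counter.json") as f:
        units = json.load(f)

    for unit in units:
        fixed = int(unit["CountersNumFixed"])
        generic = int(unit["CountersNumGeneric"])
        print(f"{unit['Unit']:>7}: {fixed} fixed + {generic} generic counters")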
[ [
{ {
"BriefDescription": "Counts all microcode FP assists.", "BriefDescription": "Counts all microcode FP assists.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1", "EventCode": "0xc1",
"EventName": "ASSISTS.FP", "EventName": "ASSISTS.FP",
"PublicDescription": "Counts all microcode Floating Point assists.", "PublicDescription": "Counts all microcode Floating Point assists.",
...@@ -9,6 +10,7 @@ ...@@ -9,6 +10,7 @@
}, },
{ {
"BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE", "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -17,6 +19,7 @@ ...@@ -17,6 +19,7 @@
}, },
{ {
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE", "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -25,6 +28,7 @@ ...@@ -25,6 +28,7 @@
}, },
{ {
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE", "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -33,6 +37,7 @@ ...@@ -33,6 +37,7 @@
}, },
{ {
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE", "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -41,6 +46,7 @@ ...@@ -41,6 +46,7 @@
}, },
{ {
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.", "BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS", "EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -49,6 +55,7 @@ ...@@ -49,6 +55,7 @@
}, },
{ {
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE", "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -57,6 +64,7 @@ ...@@ -57,6 +64,7 @@
}, },
{ {
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE", "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -65,6 +73,7 @@ ...@@ -65,6 +73,7 @@
}, },
{ {
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.", "BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.8_FLOPS", "EventName": "FP_ARITH_INST_RETIRED.8_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -73,6 +82,7 @@ ...@@ -73,6 +82,7 @@
}, },
{ {
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.", "BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR", "EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -81,6 +91,7 @@ ...@@ -81,6 +91,7 @@
}, },
{ {
"BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE", "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -89,6 +100,7 @@ ...@@ -89,6 +100,7 @@
}, },
{ {
"BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE", "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.", "PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
...@@ -97,6 +109,7 @@ ...@@ -97,6 +109,7 @@
}, },
{ {
"BriefDescription": "Number of any Vector retired FP arithmetic instructions", "BriefDescription": "Number of any Vector retired FP arithmetic instructions",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7", "EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR", "EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"SampleAfterValue": "1000003", "SampleAfterValue": "1000003",
......
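The FP_ARITH_INST_RETIRED entries above spell out how many computation operations each count represents (1 for scalar, 2/4/8/16 for the packed widths, with the FMA/DPP doubling already included in the counts). A hedged sketch of the usual FLOP roll-up under exactly that weighting; the counts and time window are placeholders.

    # Weights follow the "Each count represents N computation operations" text
    # in the descriptions above; FMA/DPP double-counting is already in the counts.
    counts = {
        "SCALAR": 10_000_000,             # placeholder values, not measurements
        "128B_PACKED_DOUBLE": 2_000_000,
        "128B_PACKED_SINGLE": 1_000_000,
        "256B_PACKED_DOUBLE": 3_000_000,
        "256B_PACKED_SINGLE": 1_500_000,
        "512B_PACKED_DOUBLE": 4_000_000,
        "512B_PACKED_SINGLE": 2_500_000,
    }
    weights = {
        "SCALAR": 1,
        "128B_PACKED_DOUBLE": 2,
        "128B_PACKED_SINGLE": 4,
        "256B_PACKED_DOUBLE": 4,
        "256B_PACKED_SINGLE": 8,
        "512B_PACKED_DOUBLE": 8,
        "512B_PACKED_SINGLE": 16,
    }
    elapsed_seconds = 1.0  # placeholder measurement window
    flops = sum(weights[name] * counts[name] for name in counts)
    print(f"{flops / elapsed_seconds / 1e9:.2f} GFLOP/s")

Per the descriptions above, the 4_FLOPS and 8_FLOPS umbrella events bundle the width-4 and width-8 groups, so the same total can also be built from SCALAR, 128B_PACKED_DOUBLE, 4_FLOPS, 8_FLOPS and 512B_PACKED_SINGLE with weights 1/2/4/8/16.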
[ [
{ {
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.", "BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
"Counter": "0,1,2,3",
"EventCode": "0xe6", "EventCode": "0xe6",
"EventName": "BACLEARS.ANY", "EventName": "BACLEARS.ANY",
"PublicDescription": "Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.", "PublicDescription": "Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
...@@ -9,6 +10,7 @@ ...@@ -9,6 +10,7 @@
}, },
{ {
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to ILD_STALL.LCP]", "BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to ILD_STALL.LCP]",
"Counter": "0,1,2,3",
"EventCode": "0x87", "EventCode": "0x87",
"EventName": "DECODE.LCP", "EventName": "DECODE.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to ILD_STALL.LCP]", "PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to ILD_STALL.LCP]",
...@@ -17,6 +19,7 @@ ...@@ -17,6 +19,7 @@
}, },
{ {
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE transitions count.", "BriefDescription": "Decode Stream Buffer (DSB)-to-MITE transitions count.",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EdgeDetect": "1", "EdgeDetect": "1",
"EventCode": "0xab", "EventCode": "0xab",
...@@ -27,6 +30,7 @@ ...@@ -27,6 +30,7 @@
}, },
{ {
"BriefDescription": "DSB-to-MITE switch true penalty cycles.", "BriefDescription": "DSB-to-MITE switch true penalty cycles.",
"Counter": "0,1,2,3",
"EventCode": "0xab", "EventCode": "0xab",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES", "EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.", "PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.",
...@@ -35,6 +39,7 @@ ...@@ -35,6 +39,7 @@
}, },
{ {
"BriefDescription": "Retired Instructions who experienced DSB miss.", "BriefDescription": "Retired Instructions who experienced DSB miss.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ANY_DSB_MISS", "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -46,6 +51,7 @@ ...@@ -46,6 +51,7 @@
}, },
{ {
"BriefDescription": "Retired Instructions who experienced a critical DSB miss.", "BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.DSB_MISS", "EventName": "FRONTEND_RETIRED.DSB_MISS",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -57,6 +63,7 @@ ...@@ -57,6 +63,7 @@
}, },
{ {
"BriefDescription": "Retired Instructions who experienced iTLB true miss.", "BriefDescription": "Retired Instructions who experienced iTLB true miss.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ITLB_MISS", "EventName": "FRONTEND_RETIRED.ITLB_MISS",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -68,6 +75,7 @@ ...@@ -68,6 +75,7 @@
}, },
{ {
"BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.", "BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L1I_MISS", "EventName": "FRONTEND_RETIRED.L1I_MISS",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -79,6 +87,7 @@ ...@@ -79,6 +87,7 @@
}, },
{ {
"BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.", "BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L2_MISS", "EventName": "FRONTEND_RETIRED.L2_MISS",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -90,6 +99,7 @@ ...@@ -90,6 +99,7 @@
}, },
{ {
"BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle", "BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_1", "EventName": "FRONTEND_RETIRED.LATENCY_GE_1",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -101,6 +111,7 @@ ...@@ -101,6 +111,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_128", "EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -112,6 +123,7 @@ ...@@ -112,6 +123,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_16", "EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -123,6 +135,7 @@ ...@@ -123,6 +135,7 @@
}, },
{ {
"BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles", "BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -134,6 +147,7 @@ ...@@ -134,6 +147,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_256", "EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -145,6 +159,7 @@ ...@@ -145,6 +159,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1", "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -156,6 +171,7 @@ ...@@ -156,6 +171,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_32", "EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -167,6 +183,7 @@ ...@@ -167,6 +183,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_4", "EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -178,6 +195,7 @@ ...@@ -178,6 +195,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_512", "EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -189,6 +207,7 @@ ...@@ -189,6 +207,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_64", "EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -200,6 +219,7 @@ ...@@ -200,6 +219,7 @@
}, },
{ {
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.", "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_8", "EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -211,6 +231,7 @@ ...@@ -211,6 +231,7 @@
}, },
{ {
"BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.", "BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6", "EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.STLB_MISS", "EventName": "FRONTEND_RETIRED.STLB_MISS",
"MSRIndex": "0x3F7", "MSRIndex": "0x3F7",
...@@ -222,6 +243,7 @@ ...@@ -222,6 +243,7 @@
}, },
{ {
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_DATA.STALLS]", "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_DATA.STALLS]",
"Counter": "0,1,2,3",
"EventCode": "0x80", "EventCode": "0x80",
"EventName": "ICACHE_16B.IFDATA_STALL", "EventName": "ICACHE_16B.IFDATA_STALL",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_DATA.STALLS]", "PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_DATA.STALLS]",
...@@ -230,6 +252,7 @@ ...@@ -230,6 +252,7 @@
}, },
{ {
"BriefDescription": "Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity.", "BriefDescription": "Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
"Counter": "0,1,2,3",
"EventCode": "0x83", "EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_HIT", "EventName": "ICACHE_64B.IFTAG_HIT",
"PublicDescription": "Counts instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.", "PublicDescription": "Counts instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.",
...@@ -238,6 +261,7 @@ ...@@ -238,6 +261,7 @@
}, },
{ {
"BriefDescription": "Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity.", "BriefDescription": "Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
"Counter": "0,1,2,3",
"EventCode": "0x83", "EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_MISS", "EventName": "ICACHE_64B.IFTAG_MISS",
"PublicDescription": "Counts instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.", "PublicDescription": "Counts instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.",
...@@ -246,6 +270,7 @@ ...@@ -246,6 +270,7 @@
}, },
{ {
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]", "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]",
"Counter": "0,1,2,3",
"EventCode": "0x83", "EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_STALL", "EventName": "ICACHE_64B.IFTAG_STALL",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]", "PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]",
...@@ -254,6 +279,7 @@ ...@@ -254,6 +279,7 @@
}, },
{ {
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_16B.IFDATA_STALL]", "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_16B.IFDATA_STALL]",
"Counter": "0,1,2,3",
"EventCode": "0x80", "EventCode": "0x80",
"EventName": "ICACHE_DATA.STALLS", "EventName": "ICACHE_DATA.STALLS",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_16B.IFDATA_STALL]", "PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_16B.IFDATA_STALL]",
...@@ -262,6 +288,7 @@ ...@@ -262,6 +288,7 @@
}, },
{ {
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]", "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]",
"Counter": "0,1,2,3",
"EventCode": "0x83", "EventCode": "0x83",
"EventName": "ICACHE_TAG.STALLS", "EventName": "ICACHE_TAG.STALLS",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]", "PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]",
...@@ -270,6 +297,7 @@ ...@@ -270,6 +297,7 @@
}, },
{ {
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop", "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EventCode": "0x79", "EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_ANY", "EventName": "IDQ.DSB_CYCLES_ANY",
...@@ -279,6 +307,7 @@ ...@@ -279,6 +307,7 @@
}, },
{ {
"BriefDescription": "Cycles DSB is delivering optimal number of Uops", "BriefDescription": "Cycles DSB is delivering optimal number of Uops",
"Counter": "0,1,2,3",
"CounterMask": "5", "CounterMask": "5",
"EventCode": "0x79", "EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_OK", "EventName": "IDQ.DSB_CYCLES_OK",
...@@ -288,6 +317,7 @@ ...@@ -288,6 +317,7 @@
}, },
{ {
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path", "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
"Counter": "0,1,2,3",
"EventCode": "0x79", "EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS", "EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.", "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
...@@ -296,6 +326,7 @@ ...@@ -296,6 +326,7 @@
}, },
{ {
"BriefDescription": "Cycles MITE is delivering any Uop", "BriefDescription": "Cycles MITE is delivering any Uop",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EventCode": "0x79", "EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_ANY", "EventName": "IDQ.MITE_CYCLES_ANY",
...@@ -305,6 +336,7 @@ ...@@ -305,6 +336,7 @@
}, },
{ {
"BriefDescription": "Cycles MITE is delivering optimal number of Uops", "BriefDescription": "Cycles MITE is delivering optimal number of Uops",
"Counter": "0,1,2,3",
"CounterMask": "5", "CounterMask": "5",
"EventCode": "0x79", "EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_OK", "EventName": "IDQ.MITE_CYCLES_OK",
...@@ -314,6 +346,7 @@ ...@@ -314,6 +346,7 @@
}, },
{ {
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path", "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
"Counter": "0,1,2,3",
"EventCode": "0x79", "EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS", "EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).", "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
...@@ -322,6 +355,7 @@ ...@@ -322,6 +355,7 @@
}, },
{ {
"BriefDescription": "Number of switches from DSB or MITE to the MS", "BriefDescription": "Number of switches from DSB or MITE to the MS",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EdgeDetect": "1", "EdgeDetect": "1",
"EventCode": "0x79", "EventCode": "0x79",
...@@ -332,6 +366,7 @@ ...@@ -332,6 +366,7 @@
}, },
{ {
"BriefDescription": "Uops delivered to IDQ while MS is busy", "BriefDescription": "Uops delivered to IDQ while MS is busy",
"Counter": "0,1,2,3",
"EventCode": "0x79", "EventCode": "0x79",
"EventName": "IDQ.MS_UOPS", "EventName": "IDQ.MS_UOPS",
"PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MS.", "PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MS.",
...@@ -340,6 +375,7 @@ ...@@ -340,6 +375,7 @@
}, },
{ {
"BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled", "BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c", "EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE", "EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
"PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle.", "PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle.",
...@@ -348,6 +384,7 @@ ...@@ -348,6 +384,7 @@
}, },
{ {
"BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled", "BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "5", "CounterMask": "5",
"EventCode": "0x9c", "EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
...@@ -357,6 +394,7 @@ ...@@ -357,6 +394,7 @@
}, },
{ {
"BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled", "BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1", "CounterMask": "1",
"EventCode": "0x9C", "EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK", "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
......
...@@ -47,7 +47,7 @@ ...@@ -47,7 +47,7 @@
}, },
{ {
"BriefDescription": "Percentage of time spent in the active CPU power state C0", "BriefDescription": "Percentage of time spent in the active CPU power state C0",
"MetricExpr": "tma_info_system_cpu_utilization", "MetricExpr": "tma_info_system_cpus_utilized",
"MetricName": "cpu_utilization", "MetricName": "cpu_utilization",
"ScaleUnit": "100%" "ScaleUnit": "100%"
}, },
...@@ -72,12 +72,36 @@ ...@@ -72,12 +72,36 @@
"PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.", "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
"ScaleUnit": "1per_instr" "ScaleUnit": "1per_instr"
}, },
{
"BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the local CPU socket.",
"MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_read_local",
"ScaleUnit": "1MB/s"
},
{
"BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from a remote CPU socket.",
"MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_read_remote",
"ScaleUnit": "1MB/s"
},
{ {
"BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.", "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.",
"MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e6 / duration_time", "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_write", "MetricName": "io_bandwidth_write",
"ScaleUnit": "1MB/s" "ScaleUnit": "1MB/s"
}, },
{
"BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the local CPU socket.",
"MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL) * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_write_local",
"ScaleUnit": "1MB/s"
},
{
"BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to a remote CPU socket.",
"MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE) * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_write_remote",
"ScaleUnit": "1MB/s"
},
{ {
"BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions", "BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY", "MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
...@@ -308,7 +332,7 @@ ...@@ -308,7 +332,7 @@
{ {
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists", "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
"MetricExpr": "34 * ASSISTS.ANY / tma_info_thread_slots", "MetricExpr": "34 * ASSISTS.ANY / tma_info_thread_slots",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group", "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists", "MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)", "MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY", "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
...@@ -318,7 +342,7 @@ ...@@ -318,7 +342,7 @@
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend", "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
"DefaultMetricgroupName": "TopdownL1", "DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / tma_info_thread_slots", "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / tma_info_thread_slots",
"MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group", "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound", "MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2", "MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1;Default", "MetricgroupNoGroup": "TopdownL1;Default",
...@@ -339,7 +363,7 @@ ...@@ -339,7 +363,7 @@
{ {
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions.", "BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions.",
"MetricExpr": "tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES / (tma_retiring * tma_info_thread_slots)", "MetricExpr": "tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Branches;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_branch_instructions", "MetricName": "tma_branch_instructions",
"MetricThreshold": "tma_branch_instructions > 0.1 & tma_light_operations > 0.6", "MetricThreshold": "tma_branch_instructions > 0.1 & tma_light_operations > 0.6",
"ScaleUnit": "100%" "ScaleUnit": "100%"
...@@ -347,7 +371,7 @@ ...@@ -347,7 +371,7 @@
{ {
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction", "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation", "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
"MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM", "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts", "MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15", "MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2", "MetricgroupNoGroup": "TopdownL2",
...@@ -385,7 +409,7 @@ ...@@ -385,7 +409,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses", "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 43.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks", "MetricExpr": "(44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 43.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group", "MetricGroup": "BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses", "MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache", "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
...@@ -405,7 +429,7 @@ ...@@ -405,7 +429,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses", "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "43.5 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks", "MetricExpr": "43.5 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group", "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing", "MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache", "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
...@@ -423,7 +447,7 @@ ...@@ -423,7 +447,7 @@
{ {
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active", "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.DIVIDER_ACTIVE / tma_info_thread_clks", "MetricExpr": "ARITH.DIVIDER_ACTIVE / tma_info_thread_clks",
"MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group", "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider", "MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)", "MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE", "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
...@@ -454,13 +478,13 @@ ...@@ -454,13 +478,13 @@
"MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB", "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_dsb_switches", "MetricName": "tma_dsb_switches",
"MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)", "MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp", "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%" "ScaleUnit": "100%"
}, },
{ {
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses", "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks", "MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
"MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group", "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load", "MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization", "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization",
...@@ -469,7 +493,7 @@ ...@@ -469,7 +493,7 @@
{ {
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses", "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks", "MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks",
"MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group", "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store", "MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization", "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization",
...@@ -478,7 +502,7 @@ ...@@ -478,7 +502,7 @@
{ {
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing", "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
"MetricExpr": "48 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks", "MetricExpr": "48 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks",
"MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group", "MetricGroup": "BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing", "MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache", "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
...@@ -487,7 +511,7 @@ ...@@ -487,7 +511,7 @@
{ {
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed", "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricExpr": "L1D_PEND_MISS.FB_FULL / tma_info_thread_clks", "MetricExpr": "L1D_PEND_MISS.FB_FULL / tma_info_thread_clks",
"MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group", "MetricGroup": "BvMS;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full", "MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3", "MetricThreshold": "tma_fb_full > 0.3",
"PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores", "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
...@@ -500,7 +524,7 @@ ...@@ -500,7 +524,7 @@
"MetricName": "tma_fetch_bandwidth", "MetricName": "tma_fetch_bandwidth",
"MetricThreshold": "tma_fetch_bandwidth > 0.2", "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2", "MetricgroupNoGroup": "TopdownL2",
"PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp", "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%" "ScaleUnit": "100%"
}, },
{ {
...@@ -542,7 +566,7 @@ ...@@ -542,7 +566,7 @@
}, },
{ {
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired", "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
"MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / (tma_retiring * tma_info_thread_slots)", "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P", "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar", "MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)", "MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
...@@ -551,7 +575,7 @@ ...@@ -551,7 +575,7 @@
}, },
{ {
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths", "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
"MetricExpr": "cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@ / (tma_retiring * tma_info_thread_slots)", "MetricExpr": "FP_ARITH_INST_RETIRED.VECTOR / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P", "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector", "MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)", "MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
...@@ -589,7 +613,7 @@ ...@@ -589,7 +613,7 @@
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend", "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"DefaultMetricgroupName": "TopdownL1", "DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots", "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
"MetricGroup": "Default;PGO;TmaL1;TopdownL1;tma_L1_group", "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound", "MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15", "MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1;Default", "MetricgroupNoGroup": "TopdownL1;Default",
...@@ -609,7 +633,7 @@ ...@@ -609,7 +633,7 @@
{ {
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses", "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
"MetricExpr": "ICACHE_DATA.STALLS / tma_info_thread_clks", "MetricExpr": "ICACHE_DATA.STALLS / tma_info_thread_clks",
"MetricGroup": "BigFootprint;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group", "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses", "MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)", "MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS", "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
...@@ -664,24 +688,6 @@ ...@@ -664,24 +688,6 @@
"MetricGroup": "BrMispredicts", "MetricGroup": "BrMispredicts",
"MetricName": "tma_info_bad_spec_spec_clears_ratio" "MetricName": "tma_info_bad_spec_spec_clears_ratio"
}, },
{
"BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
"MetricExpr": "(100 * (1 - max(0, topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots - (CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES) * (topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots)) / (((cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + max(0, topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots - (CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES) * (topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots)) * RS_EVENTS.EMPTY_CYCLES) / CPU_CLK_UNHALTED.THREAD * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CPU_CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) if max(0, topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots - (CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES) * (topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots)) < (((cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + max(0, topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots - (CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES) * (topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / slots)) * RS_EVENTS.EMPTY_CYCLES) / CPU_CLK_UNHALTED.THREAD * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / CPU_CLK_UNHALTED.THREAD * CPU_CLK_UNHALTED.THREAD + (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * 
EXE_ACTIVITY.2_PORTS_UTIL)) / CPU_CLK_UNHALTED.THREAD if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) * EXE_ACTIVITY.2_PORTS_UTIL) / CPU_CLK_UNHALTED.THREAD) else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
"MetricGroup": "Cor;SMT",
"MetricName": "tma_info_botlnk_core_bound_likely"
},
{
"BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck.",
"MetricExpr": "100 * (100 * ((5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING) / slots * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / (ICACHE_DATA.STALLS / CPU_CLK_UNHALTED.THREAD + ICACHE_TAG.STALLS / CPU_CLK_UNHALTED.THREAD + (INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD + 10 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) + min(3 * IDQ.MS_SWITCHES / CPU_CLK_UNHALTED.THREAD, 1) + DECODE.LCP / CPU_CLK_UNHALTED.THREAD + DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) + max(0, topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / slots - (5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING) / slots) * ((IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else CPU_CLK_UNHALTED.THREAD) / 2) / ((IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else CPU_CLK_UNHALTED.THREAD) / 2 + (IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else CPU_CLK_UNHALTED.THREAD) / 2)))",
"MetricGroup": "DSBmiss;Fed",
"MetricName": "tma_info_botlnk_dsb_misses"
},
{
"BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck.",
"MetricExpr": "100 * (100 * ((5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING) / slots * (ICACHE_DATA.STALLS / CPU_CLK_UNHALTED.THREAD) / (ICACHE_DATA.STALLS / CPU_CLK_UNHALTED.THREAD + ICACHE_TAG.STALLS / CPU_CLK_UNHALTED.THREAD + (INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD + 10 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) + min(3 * IDQ.MS_SWITCHES / CPU_CLK_UNHALTED.THREAD, 1) + DECODE.LCP / CPU_CLK_UNHALTED.THREAD + DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD)))",
"MetricGroup": "Fed;FetchLat;IcMiss",
"MetricName": "tma_info_botlnk_ic_misses"
},
{ {
"BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts", "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
...@@ -690,6 +696,14 @@ ...@@ -690,6 +696,14 @@
"MetricName": "tma_info_botlnk_l0_core_bound_likely", "MetricName": "tma_info_botlnk_l0_core_bound_likely",
"MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5" "MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5"
}, },
{
"BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
"MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite)))",
"MetricGroup": "DSB;FetchBW;tma_issueFB",
"MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
"MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
"PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
},
{ {
"BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck", "BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
...@@ -697,7 +711,7 @@ ...@@ -697,7 +711,7 @@
"MetricGroup": "DSBmiss;Fed;tma_issueFB", "MetricGroup": "DSBmiss;Fed;tma_issueFB",
"MetricName": "tma_info_botlnk_l2_dsb_misses", "MetricName": "tma_info_botlnk_l2_dsb_misses",
"MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10", "MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
"PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp" "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
}, },
{ {
"BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck", "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
...@@ -708,40 +722,34 @@ ...@@ -708,40 +722,34 @@
"MetricThreshold": "tma_info_botlnk_l2_ic_misses > 5", "MetricThreshold": "tma_info_botlnk_l2_ic_misses > 5",
"PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: " "PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: "
}, },
{
"BriefDescription": "Total pipeline cost of \"useful operations\" - the baseline operations not covered by Branching_Overhead nor Irregular_Overhead.",
"MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + BR_INST_RETIRED.NEAR_CALL) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
"MetricGroup": "Ret",
"MetricName": "tma_info_bottleneck_base_non_br",
"MetricThreshold": "tma_info_bottleneck_base_non_br > 20"
},
{ {
"BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)", "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)", "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
"MetricGroup": "BigFootprint;Fed;Frontend;IcMiss;MemoryTLB", "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
"MetricName": "tma_info_bottleneck_big_code", "MetricName": "tma_info_bottleneck_big_code",
"MetricThreshold": "tma_info_bottleneck_big_code > 20" "MetricThreshold": "tma_info_bottleneck_big_code > 20"
}, },
{ {
"BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)", "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
"MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + BR_INST_RETIRED.NEAR_CALL) / tma_info_thread_slots)", "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)",
"MetricGroup": "Ret", "MetricGroup": "BvBO;Ret",
"MetricName": "tma_info_bottleneck_branching_overhead", "MetricName": "tma_info_bottleneck_branching_overhead",
"MetricThreshold": "tma_info_bottleneck_branching_overhead > 5" "MetricThreshold": "tma_info_bottleneck_branching_overhead > 5",
"PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)"
}, },
{ {
"BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks", "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
"MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))", "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
"MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
"MetricName": "tma_info_bottleneck_cache_memory_bandwidth", "MetricName": "tma_info_bottleneck_cache_memory_bandwidth",
"MetricThreshold": "tma_info_bottleneck_cache_memory_bandwidth > 20", "MetricThreshold": "tma_info_bottleneck_cache_memory_bandwidth > 20",
"PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full" "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
}, },
{ {
"BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks", "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
"MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))", "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
"MetricGroup": "Mem;MemoryLat;Offcore;tma_issueLat", "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
"MetricName": "tma_info_bottleneck_cache_memory_latency", "MetricName": "tma_info_bottleneck_cache_memory_latency",
"MetricThreshold": "tma_info_bottleneck_cache_memory_latency > 20", "MetricThreshold": "tma_info_bottleneck_cache_memory_latency > 20",
"PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency" "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency"
...@@ -749,23 +757,23 @@ ...@@ -749,23 +757,23 @@
{ {
"BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation", "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
"MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))", "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
"MetricGroup": "Cor;tma_issueComp", "MetricGroup": "BvCB;Cor;tma_issueComp",
"MetricName": "tma_info_bottleneck_compute_bound_est", "MetricName": "tma_info_bottleneck_compute_bound_est",
"MetricThreshold": "tma_info_bottleneck_compute_bound_est > 20", "MetricThreshold": "tma_info_bottleneck_compute_bound_est > 20",
"PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: " "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: "
}, },
{ {
"BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks", "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code", "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code",
"MetricGroup": "Fed;FetchBW;Frontend", "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
"MetricName": "tma_info_bottleneck_instruction_fetch_bw", "MetricName": "tma_info_bottleneck_instruction_fetch_bw",
"MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20" "MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20"
}, },
{ {
"BriefDescription": "Total pipeline cost of irregular execution (e.g", "BriefDescription": "Total pipeline cost of irregular execution (e.g",
"MetricExpr": "100 * (tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + tma_core_bound * RS_EVENTS.EMPTY_CYCLES / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)", "MetricExpr": "100 * (tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + tma_core_bound * RS_EVENTS.EMPTY_CYCLES / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
"MetricGroup": "Bad;Cor;Ret;tma_issueMS", "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
"MetricName": "tma_info_bottleneck_irregular_overhead", "MetricName": "tma_info_bottleneck_irregular_overhead",
"MetricThreshold": "tma_info_bottleneck_irregular_overhead > 10", "MetricThreshold": "tma_info_bottleneck_irregular_overhead > 10",
"PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches" "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches"
...@@ -773,8 +781,8 @@ ...@@ -773,8 +781,8 @@
{ {
"BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)", "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))", "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
"MetricGroup": "Mem;MemoryTLB;Offcore;tma_issueTLB", "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
"MetricName": "tma_info_bottleneck_memory_data_tlbs", "MetricName": "tma_info_bottleneck_memory_data_tlbs",
"MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20", "MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20",
"PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization" "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization"
...@@ -782,7 +790,7 @@ ...@@ -782,7 +790,7 @@
{ {
"BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)", "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
"MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))", "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
"MetricGroup": "Mem;Offcore;tma_issueTLB", "MetricGroup": "BvMS;Mem;Offcore;tma_issueTLB",
"MetricName": "tma_info_bottleneck_memory_synchronization", "MetricName": "tma_info_bottleneck_memory_synchronization",
"MetricThreshold": "tma_info_bottleneck_memory_synchronization > 10", "MetricThreshold": "tma_info_bottleneck_memory_synchronization > 10",
"PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs" "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs"
...@@ -791,18 +799,25 @@ ...@@ -791,18 +799,25 @@
"BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks", "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
"MetricGroup": "Bad;BadSpec;BrMispredicts;tma_issueBM", "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
"MetricName": "tma_info_bottleneck_mispredictions", "MetricName": "tma_info_bottleneck_mispredictions",
"MetricThreshold": "tma_info_bottleneck_mispredictions > 20", "MetricThreshold": "tma_info_bottleneck_mispredictions > 20",
"PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers" "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
}, },
{ {
"BriefDescription": "Total pipeline cost of remaining bottlenecks (apart from those listed in the Info.Bottlenecks metrics class)", "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
"MetricExpr": "100 - (tma_info_bottleneck_big_code + tma_info_bottleneck_instruction_fetch_bw + tma_info_bottleneck_mispredictions + tma_info_bottleneck_cache_memory_bandwidth + tma_info_bottleneck_cache_memory_latency + tma_info_bottleneck_memory_data_tlbs + tma_info_bottleneck_memory_synchronization + tma_info_bottleneck_compute_bound_est + tma_info_bottleneck_irregular_overhead + tma_info_bottleneck_branching_overhead + tma_info_bottleneck_base_non_br)", "MetricExpr": "100 - (tma_info_bottleneck_big_code + tma_info_bottleneck_instruction_fetch_bw + tma_info_bottleneck_mispredictions + tma_info_bottleneck_cache_memory_bandwidth + tma_info_bottleneck_cache_memory_latency + tma_info_bottleneck_memory_data_tlbs + tma_info_bottleneck_memory_synchronization + tma_info_bottleneck_compute_bound_est + tma_info_bottleneck_irregular_overhead + tma_info_bottleneck_branching_overhead + tma_info_bottleneck_useful_work)",
"MetricGroup": "Cor;Offcore", "MetricGroup": "BvOB;Cor;Offcore",
"MetricName": "tma_info_bottleneck_other_bottlenecks", "MetricName": "tma_info_bottleneck_other_bottlenecks",
"MetricThreshold": "tma_info_bottleneck_other_bottlenecks > 20", "MetricThreshold": "tma_info_bottleneck_other_bottlenecks > 20",
"PublicDescription": "Total pipeline cost of remaining bottlenecks (apart from those listed in the Info.Bottlenecks metrics class). Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls." "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
},
{
"BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
"MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
"MetricGroup": "BvUW;Ret",
"MetricName": "tma_info_bottleneck_useful_work",
"MetricThreshold": "tma_info_bottleneck_useful_work > 20"
}, },
{ {
"BriefDescription": "Fraction of branches that are CALL or RET", "BriefDescription": "Fraction of branches that are CALL or RET",
...@@ -860,7 +875,7 @@ ...@@ -860,7 +875,7 @@
}, },
{ {
"BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)", "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
"MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)", "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR) / (2 * tma_info_core_core_clks)",
"MetricGroup": "Cor;Flops;HPC", "MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_core_fp_arith_utilization", "MetricName": "tma_info_core_fp_arith_utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)." "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
...@@ -877,7 +892,7 @@ ...@@ -877,7 +892,7 @@
"MetricGroup": "DSB;Fed;FetchBW;tma_issueFB", "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
"MetricName": "tma_info_frontend_dsb_coverage", "MetricName": "tma_info_frontend_dsb_coverage",
"MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 5 > 0.35", "MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 5 > 0.35",
"PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp" "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
}, },
{ {
"BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.", "BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
...@@ -937,7 +952,7 @@ ...@@ -937,7 +952,7 @@
}, },
{ {
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)", "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@)", "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)",
"MetricGroup": "Flops;InsType", "MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith", "MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10", "MetricThreshold": "tma_info_inst_mix_iparith < 10",
...@@ -1032,24 +1047,12 @@ ...@@ -1032,24 +1047,12 @@
"MetricThreshold": "tma_info_inst_mix_ipswpf < 100" "MetricThreshold": "tma_info_inst_mix_ipswpf < 100"
}, },
{ {
"BriefDescription": "Instruction per taken branch", "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN", "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB", "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb", "MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 11", "MetricThreshold": "tma_info_inst_mix_iptb < 11",
"PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp" "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "\"Bus lock\" per kilo instruction",
"MetricExpr": "tma_info_memory_mix_bus_lock_pki",
"MetricGroup": "Mem",
"MetricName": "tma_info_memory_bus_lock_pki"
},
{
"BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
"MetricExpr": "tma_info_memory_tlb_code_stlb_mpki",
"MetricGroup": "Fed;MemoryTLB",
"MetricName": "tma_info_memory_code_stlb_mpki"
}, },
{ {
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]", "BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
...@@ -1087,12 +1090,6 @@ ...@@ -1087,12 +1090,6 @@
"MetricGroup": "Mem;MemoryBW", "MetricGroup": "Mem;MemoryBW",
"MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t" "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
}, },
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "tma_info_memory_latency_data_l2_mlp",
"MetricGroup": "Memory_BW;Offcore",
"MetricName": "tma_info_memory_data_l2_mlp"
},
{ {
"BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)", "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY", "MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
...@@ -1100,17 +1097,11 @@ ...@@ -1100,17 +1097,11 @@
"MetricName": "tma_info_memory_fb_hpki" "MetricName": "tma_info_memory_fb_hpki"
}, },
{ {
"BriefDescription": "", "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
"MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time", "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
"MetricGroup": "Mem;MemoryBW", "MetricGroup": "Mem;MemoryBW",
"MetricName": "tma_info_memory_l1d_cache_fill_bw" "MetricName": "tma_info_memory_l1d_cache_fill_bw"
}, },
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
"MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / (duration_time * 1e3 / 1e3)",
"MetricGroup": "Mem;MemoryBW",
"MetricName": "tma_info_memory_l1d_cache_fill_bw_2t"
},
{ {
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads", "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY", "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
...@@ -1124,29 +1115,11 @@ ...@@ -1124,29 +1115,11 @@
"MetricName": "tma_info_memory_l1mpki_load" "MetricName": "tma_info_memory_l1mpki_load"
}, },
{ {
"BriefDescription": "", "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
"MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time", "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
"MetricGroup": "Mem;MemoryBW", "MetricGroup": "Mem;MemoryBW",
"MetricName": "tma_info_memory_l2_cache_fill_bw" "MetricName": "tma_info_memory_l2_cache_fill_bw"
}, },
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
"MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / (duration_time * 1e3 / 1e3)",
"MetricGroup": "Mem;MemoryBW",
"MetricName": "tma_info_memory_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
"MetricExpr": "1e3 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY",
"MetricGroup": "L2Evicts;Mem;Server",
"MetricName": "tma_info_memory_l2_evictions_nonsilent_pki"
},
{
"BriefDescription": "Rate of silent evictions from the L2 cache per Kilo instruction where the evicted lines are dropped (no writeback to L3 or memory)",
"MetricExpr": "1e3 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY",
"MetricGroup": "L2Evicts;Mem;Server",
"MetricName": "tma_info_memory_l2_evictions_silent_pki"
},
{ {
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)", "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY", "MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
...@@ -1172,29 +1145,23 @@ ...@@ -1172,29 +1145,23 @@
"MetricName": "tma_info_memory_l2mpki_load" "MetricName": "tma_info_memory_l2mpki_load"
}, },
{ {
"BriefDescription": "", "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
"MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time", "MetricExpr": "1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY",
"MetricGroup": "Mem;MemoryBW;Offcore", "MetricGroup": "CacheMisses;Offcore",
"MetricName": "tma_info_memory_l3_cache_access_bw" "MetricName": "tma_info_memory_l2mpki_rfo"
}, },
{ {
"BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]", "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
"MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / (duration_time * 1e3 / 1e3)", "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time",
"MetricGroup": "Mem;MemoryBW;Offcore", "MetricGroup": "Mem;MemoryBW;Offcore",
"MetricName": "tma_info_memory_l3_cache_access_bw_2t" "MetricName": "tma_info_memory_l3_cache_access_bw"
}, },
{ {
"BriefDescription": "", "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
"MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time", "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
"MetricGroup": "Mem;MemoryBW", "MetricGroup": "Mem;MemoryBW",
"MetricName": "tma_info_memory_l3_cache_fill_bw" "MetricName": "tma_info_memory_l3_cache_fill_bw"
}, },
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
"MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / (duration_time * 1e3 / 1e3)",
"MetricGroup": "Mem;MemoryBW",
"MetricName": "tma_info_memory_l3_cache_fill_bw_2t"
},
{ {
"BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads", "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY", "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
...@@ -1209,7 +1176,7 @@ ...@@ -1209,7 +1176,7 @@
}, },
{ {
"BriefDescription": "Average Latency for L2 cache miss demand Loads", "BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "tma_info_memory_load_l2_miss_latency", "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
"MetricGroup": "Memory_Lat;Offcore", "MetricGroup": "Memory_Lat;Offcore",
"MetricName": "tma_info_memory_latency_load_l2_miss_latency" "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
}, },
...@@ -1219,29 +1186,11 @@ ...@@ -1219,29 +1186,11 @@
"MetricGroup": "Memory_BW;Offcore", "MetricGroup": "Memory_BW;Offcore",
"MetricName": "tma_info_memory_latency_load_l2_mlp" "MetricName": "tma_info_memory_latency_load_l2_mlp"
}, },
{
"BriefDescription": "Average Latency for L3 cache miss demand Loads",
"MetricExpr": "tma_info_memory_load_l3_miss_latency",
"MetricGroup": "Memory_Lat;Offcore",
"MetricName": "tma_info_memory_latency_load_l3_miss_latency"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
"MetricGroup": "Memory_Lat;Offcore",
"MetricName": "tma_info_memory_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=0x1@",
"MetricGroup": "Memory_BW;Offcore",
"MetricName": "tma_info_memory_load_l2_mlp"
},
{ {
"BriefDescription": "Average Latency for L3 cache miss demand Loads", "BriefDescription": "Average Latency for L3 cache miss demand Loads",
"MetricExpr": "cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,umask\\=0x10@ / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD", "MetricExpr": "cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,umask\\=0x10@ / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
"MetricGroup": "Memory_Lat;Offcore", "MetricGroup": "Memory_Lat;Offcore",
"MetricName": "tma_info_memory_load_l3_miss_latency" "MetricName": "tma_info_memory_latency_load_l3_miss_latency"
}, },
{ {
"BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)", "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
...@@ -1249,12 +1198,6 @@ ...@@ -1249,12 +1198,6 @@
"MetricGroup": "Mem;MemoryBound;MemoryLat", "MetricGroup": "Mem;MemoryBound;MemoryLat",
"MetricName": "tma_info_memory_load_miss_real_latency" "MetricName": "tma_info_memory_load_miss_real_latency"
}, },
{
"BriefDescription": "STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
"MetricExpr": "tma_info_memory_tlb_load_stlb_mpki",
"MetricGroup": "Mem;MemoryTLB",
"MetricName": "tma_info_memory_load_stlb_mpki"
},
{ {
"BriefDescription": "\"Bus lock\" per kilo instruction", "BriefDescription": "\"Bus lock\" per kilo instruction",
"MetricExpr": "1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY", "MetricExpr": "1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY",
...@@ -1263,7 +1206,7 @@ ...@@ -1263,7 +1206,7 @@
}, },
{ {
"BriefDescription": "Un-cacheable retired load per kilo instruction", "BriefDescription": "Un-cacheable retired load per kilo instruction",
"MetricExpr": "tma_info_memory_uc_load_pki", "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY",
"MetricGroup": "Mem", "MetricGroup": "Mem",
"MetricName": "tma_info_memory_mix_uc_load_pki" "MetricName": "tma_info_memory_mix_uc_load_pki"
}, },
...@@ -1274,18 +1217,6 @@ ...@@ -1274,18 +1217,6 @@
"MetricName": "tma_info_memory_mlp", "MetricName": "tma_info_memory_mlp",
"PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)" "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
}, },
{
"BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
"MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (2 * (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else CPU_CLK_UNHALTED.THREAD))",
"MetricGroup": "Mem;MemoryTLB",
"MetricName": "tma_info_memory_page_walks_utilization"
},
{
"BriefDescription": "STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
"MetricExpr": "tma_info_memory_tlb_store_stlb_mpki",
"MetricGroup": "Mem;MemoryTLB",
"MetricName": "tma_info_memory_store_stlb_mpki"
},
{ {
"BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)", "BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
"MetricExpr": "1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY", "MetricExpr": "1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
...@@ -1312,17 +1243,23 @@ ...@@ -1312,17 +1243,23 @@
"MetricName": "tma_info_memory_tlb_store_stlb_mpki" "MetricName": "tma_info_memory_tlb_store_stlb_mpki"
}, },
{ {
"BriefDescription": "Un-cacheable retired load per kilo instruction", "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per core",
"MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY",
"MetricGroup": "Mem",
"MetricName": "tma_info_memory_uc_load_pki"
},
{
"BriefDescription": "",
"MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@)", "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT", "MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute" "MetricName": "tma_info_pipeline_execute"
}, },
{
"BriefDescription": "Average number of uops fetched from DSB per cycle",
"MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY",
"MetricGroup": "Fed;FetchBW",
"MetricName": "tma_info_pipeline_fetch_dsb"
},
{
"BriefDescription": "Average number of uops fetched from MITE per cycle",
"MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY",
"MetricGroup": "Fed;FetchBW",
"MetricName": "tma_info_pipeline_fetch_mite"
},
{ {
"BriefDescription": "Instructions per a microcode Assist invocation", "BriefDescription": "Instructions per a microcode Assist invocation",
"MetricExpr": "INST_RETIRED.ANY / ASSISTS.ANY", "MetricExpr": "INST_RETIRED.ANY / ASSISTS.ANY",
...@@ -1345,13 +1282,13 @@ ...@@ -1345,13 +1282,13 @@
}, },
{ {
"BriefDescription": "Average CPU Utilization (percentage)", "BriefDescription": "Average CPU Utilization (percentage)",
"MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC", "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary", "MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization" "MetricName": "tma_info_system_cpu_utilization"
}, },
{ {
"BriefDescription": "Average number of utilized CPUs", "BriefDescription": "Average number of utilized CPUs",
"MetricExpr": "#num_cpus_online * tma_info_system_cpu_utilization", "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
"MetricGroup": "Summary", "MetricGroup": "Summary",
"MetricName": "tma_info_system_cpus_utilized" "MetricName": "tma_info_system_cpus_utilized"
}, },
...@@ -1535,7 +1472,7 @@ ...@@ -1535,7 +1472,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05" "MetricThreshold": "tma_info_thread_uoppi > 1.05"
}, },
{ {
"BriefDescription": "Instruction per taken branch", "BriefDescription": "Uops per taken branch",
"MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN", "MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW", "MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb", "MetricName": "tma_info_thread_uptb",
...@@ -1544,7 +1481,7 @@ ...@@ -1544,7 +1481,7 @@
{ {
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses", "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks", "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks",
"MetricGroup": "BigFootprint;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group", "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses", "MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)", "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS", "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
...@@ -1559,11 +1496,20 @@ ...@@ -1559,11 +1496,20 @@
"PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1", "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
"ScaleUnit": "100%" "ScaleUnit": "100%"
}, },
{
"BriefDescription": "This metric roughly estimates fraction of cycles with demand load accesses that hit the L1 cache",
"MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
"MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_l1_hit_latency",
"MetricThreshold": "tma_l1_hit_latency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates fraction of cycles with demand load accesses that hit the L1 cache. The short latency of the L1 data cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
"ScaleUnit": "100%"
},
{ {
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads", "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricConstraint": "NO_GROUP_EVENTS", "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + L1D_PEND_MISS.FB_FULL_PERIODS) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks)", "MetricExpr": "MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + L1D_PEND_MISS.FB_FULL_PERIODS) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks)",
"MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound", "MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)", "MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS", "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
...@@ -1582,7 +1528,7 @@ ...@@ -1582,7 +1528,7 @@
{ {
"BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)", "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
"MetricExpr": "19 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks", "MetricExpr": "19 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks",
"MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group", "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency", "MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency", "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency",
...@@ -1594,7 +1540,7 @@ ...@@ -1594,7 +1540,7 @@
"MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB", "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_lcp", "MetricName": "tma_lcp",
"MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)", "MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb", "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
"ScaleUnit": "100%" "ScaleUnit": "100%"
}, },
{ {
...@@ -1638,7 +1584,7 @@ ...@@ -1638,7 +1584,7 @@
"MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group", "MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group",
"MetricName": "tma_local_mem", "MetricName": "tma_local_mem",
"MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))", "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM_PS", "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
"ScaleUnit": "100%" "ScaleUnit": "100%"
}, },
{ {
...@@ -1648,13 +1594,13 @@ ...@@ -1648,13 +1594,13 @@
"MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group", "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency", "MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency", "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
"ScaleUnit": "100%" "ScaleUnit": "100%"
}, },
{ {
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears", "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)", "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
"MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn", "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears", "MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15", "MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2", "MetricgroupNoGroup": "TopdownL2",
...@@ -1664,7 +1610,7 @@ ...@@ -1664,7 +1610,7 @@
{ {
"BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)", "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks", "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
"MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW", "MetricGroup": "BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth", "MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full", "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full",
...@@ -1673,7 +1619,7 @@ ...@@ -1673,7 +1619,7 @@
{ {
"BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)", "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth", "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
"MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat", "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency", "MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_cache_memory_latency, tma_l3_hit_latency", "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_cache_memory_latency, tma_l3_hit_latency",
...@@ -1710,7 +1656,7 @@ ...@@ -1710,7 +1656,7 @@
{ {
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage", "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks", "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
"MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM", "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers", "MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))", "MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions", "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions",
...@@ -1754,7 +1700,7 @@ ...@@ -1754,7 +1700,7 @@
{ {
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions", "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
"MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots)", "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group", "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
"MetricName": "tma_nop_instructions", "MetricName": "tma_nop_instructions",
"MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)", "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
"PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP", "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
...@@ -1773,7 +1719,7 @@ ...@@ -1773,7 +1719,7 @@
{ {
"BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).", "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
"MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)", "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)",
"MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group", "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
"MetricName": "tma_other_mispredicts", "MetricName": "tma_other_mispredicts",
"MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)", "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
"ScaleUnit": "100%" "ScaleUnit": "100%"
...@@ -1781,7 +1727,7 @@ ...@@ -1781,7 +1727,7 @@
{ {
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.", "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
"MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)", "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)",
"MetricGroup": "Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group", "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
"MetricName": "tma_other_nukes", "MetricName": "tma_other_nukes",
"MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)", "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
"ScaleUnit": "100%" "ScaleUnit": "100%"
...@@ -1842,7 +1788,7 @@ ...@@ -1842,7 +1788,7 @@
}, },
{ {
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)", "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
"MetricExpr": "(cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + tma_core_bound * RS_EVENTS.EMPTY_CYCLES) / tma_info_thread_clks * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / tma_info_thread_clks", "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group", "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0", "MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
...@@ -1870,7 +1816,7 @@ ...@@ -1870,7 +1816,7 @@
{ {
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)", "BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
"MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks", "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group", "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m", "MetricName": "tma_ports_utilized_3m",
"MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3", "PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
...@@ -1898,7 +1844,7 @@ ...@@ -1898,7 +1844,7 @@
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired", "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"DefaultMetricgroupName": "TopdownL1", "DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0 * tma_info_thread_slots", "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0 * tma_info_thread_slots",
"MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group", "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring", "MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1", "MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1;Default", "MetricgroupNoGroup": "TopdownL1;Default",
...@@ -1908,7 +1854,7 @@ ...@@ -1908,7 +1854,7 @@
{ {
"BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations", "BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
"MetricExpr": "RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks", "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO", "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
"MetricName": "tma_serializing_operation", "MetricName": "tma_serializing_operation",
"MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)", "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches", "PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
...@@ -1945,7 +1891,7 @@ ...@@ -1945,7 +1891,7 @@
{ {
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)", "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "L1D_PEND_MISS.L2_STALL / tma_info_thread_clks", "MetricExpr": "L1D_PEND_MISS.L2_STALL / tma_info_thread_clks",
"MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group", "MetricGroup": "BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full", "MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth", "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth",
...@@ -1973,7 +1919,7 @@ ...@@ -1973,7 +1919,7 @@
{ {
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses", "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks", "MetricExpr": "(L2_RQSTS.RFO_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
"MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group", "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency", "MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))", "MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency", "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
...@@ -2016,7 +1962,7 @@ ...@@ -2016,7 +1962,7 @@
{ {
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears", "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "10 * BACLEARS.ANY / tma_info_thread_clks", "MetricExpr": "10 * BACLEARS.ANY / tma_info_thread_clks",
"MetricGroup": "BigFootprint;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group", "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches", "MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))", "MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY", "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY",
......
[ [
{ {
"BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.", "BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.",
"Counter": "0,1,2,3",
"CounterMask": "6", "CounterMask": "6",
"EventCode": "0xa3", "EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS", "EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS",
...@@ -9,6 +10,7 @@ ...@@ -9,6 +10,7 @@
}, },
{ {
"BriefDescription": "Number of machine clears due to memory ordering conflicts.", "BriefDescription": "Number of machine clears due to memory ordering conflicts.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3", "EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING", "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture", "PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture",
...@@ -17,6 +19,7 @@ ...@@ -17,6 +19,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
...@@ -29,6 +32,7 @@ ...@@ -29,6 +32,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
...@@ -41,6 +45,7 @@ ...@@ -41,6 +45,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
...@@ -53,6 +58,7 @@ ...@@ -53,6 +58,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
...@@ -65,6 +71,7 @@ ...@@ -65,6 +71,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
...@@ -77,6 +84,7 @@ ...@@ -77,6 +84,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
...@@ -89,6 +97,7 @@ ...@@ -89,6 +97,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
...@@ -101,6 +110,7 @@ ...@@ -101,6 +110,7 @@
}, },
{ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.", "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
"Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1", "Data_LA": "1",
"EventCode": "0xcd", "EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8", "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
...@@ -113,6 +123,7 @@ ...@@ -113,6 +123,7 @@
}, },
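The LOAD_LATENCY_GT_* events above back PEBS load-latency sampling; a hedged usage sketch driving them through the perf tool from Python (the workload path is hypothetical, and the event name spelling assumes a perf build that carries this icelakex JSON):

import subprocess

# Sample loads whose first-dispatch-to-completion latency exceeded 128 cycles.
record = [
    "perf", "record",
    "-e", "mem_trans_retired.load_latency_gt_128:P",  # :P requests maximum precision (PEBS)
    "--", "./my_workload",  # hypothetical binary
]
subprocess.run(record, check=True)

# Inspect the collected samples; 'perf mem record/report' is an alternative front end.
subprocess.run(["perf", "report", "--stdio"], check=True)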
{ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches.", "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS", "EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -122,6 +133,7 @@ ...@@ -122,6 +133,7 @@
}, },
{ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL", "EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -131,6 +143,7 @@ ...@@ -131,6 +143,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches.", "BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS", "EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -140,6 +153,7 @@ ...@@ -140,6 +153,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL", "EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -149,6 +163,7 @@ ...@@ -149,6 +163,7 @@
}, },
{ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.", "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS", "EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -158,6 +173,7 @@ ...@@ -158,6 +173,7 @@
}, },
{ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by the local socket.", "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by the local socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL", "EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -167,6 +183,7 @@ ...@@ -167,6 +183,7 @@
}, },
{ {
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.", "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_MISS", "EventName": "OCR.HWPF_L1D_AND_SWPF.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -176,6 +193,7 @@ ...@@ -176,6 +193,7 @@
}, },
{ {
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_MISS_LOCAL", "EventName": "OCR.HWPF_L1D_AND_SWPF.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -185,6 +203,7 @@ ...@@ -185,6 +203,7 @@
}, },
{ {
"BriefDescription": "Counts hardware prefetches to the L3 only that missed the local socket's L1, L2, and L3 caches.", "BriefDescription": "Counts hardware prefetches to the L3 only that missed the local socket's L1, L2, and L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.L3_MISS", "EventName": "OCR.HWPF_L3.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -194,6 +213,7 @@ ...@@ -194,6 +213,7 @@
}, },
{ {
"BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.L3_MISS_LOCAL", "EventName": "OCR.HWPF_L3.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -203,6 +223,7 @@ ...@@ -203,6 +223,7 @@
}, },
{ {
"BriefDescription": "Counts full cacheline writes (ItoM) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts full cacheline writes (ItoM) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.ITOM.L3_MISS_LOCAL", "EventName": "OCR.ITOM.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -212,6 +233,7 @@ ...@@ -212,6 +233,7 @@
}, },
{ {
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that were not supplied by the local socket's L1, L2, or L3 caches.", "BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that were not supplied by the local socket's L1, L2, or L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS", "EventName": "OCR.OTHER.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -221,6 +243,7 @@ ...@@ -221,6 +243,7 @@
}, },
{ {
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL", "EventName": "OCR.OTHER.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -230,6 +253,7 @@ ...@@ -230,6 +253,7 @@
}, },
{ {
"BriefDescription": "Counts hardware and software prefetches to all cache levels that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts hardware and software prefetches to all cache levels that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.PREFETCHES.L3_MISS_LOCAL", "EventName": "OCR.PREFETCHES.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -239,6 +263,7 @@ ...@@ -239,6 +263,7 @@
}, },
{ {
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches.", "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_MISS", "EventName": "OCR.READS_TO_CORE.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -248,6 +273,7 @@ ...@@ -248,6 +273,7 @@
}, },
{ {
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by the local socket.", "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by the local socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL", "EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -257,6 +283,7 @@ ...@@ -257,6 +283,7 @@
}, },
{ {
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster.", "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL_SOCKET", "EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL_SOCKET",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -266,6 +293,7 @@ ...@@ -266,6 +293,7 @@
}, },
{ {
"BriefDescription": "Counts streaming stores that missed the local socket's L1, L2, and L3 caches.", "BriefDescription": "Counts streaming stores that missed the local socket's L1, L2, and L3 caches.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_MISS", "EventName": "OCR.STREAMING_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -275,6 +303,7 @@ ...@@ -275,6 +303,7 @@
}, },
{ {
"BriefDescription": "Counts streaming stores that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.", "BriefDescription": "Counts streaming stores that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_MISS_LOCAL", "EventName": "OCR.STREAMING_WR.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -284,6 +313,7 @@ ...@@ -284,6 +313,7 @@
}, },
{ {
"BriefDescription": "Counts demand data read requests that miss the L3 cache.", "BriefDescription": "Counts demand data read requests that miss the L3 cache.",
"Counter": "0,1,2,3",
"EventCode": "0xb0", "EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD", "EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
"SampleAfterValue": "100003", "SampleAfterValue": "100003",
...@@ -291,6 +321,7 @@ ...@@ -291,6 +321,7 @@
}, },
{ {
"BriefDescription": "Cycles where at least one demand data read request known to have missed the L3 cache is pending.", "BriefDescription": "Cycles where at least one demand data read request known to have missed the L3 cache is pending.",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EventCode": "0x60", "EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD",
...@@ -300,6 +331,7 @@ ...@@ -300,6 +331,7 @@
}, },
{ {
"BriefDescription": "This event is deprecated.", "BriefDescription": "This event is deprecated.",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0x60", "EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD",
...@@ -308,6 +340,7 @@ ...@@ -308,6 +340,7 @@
}, },
{ {
"BriefDescription": "Cycles where the core is waiting on at least 6 outstanding demand data read requests known to have missed the L3 cache.", "BriefDescription": "Cycles where the core is waiting on at least 6 outstanding demand data read requests known to have missed the L3 cache.",
"Counter": "0,1,2,3",
"CounterMask": "6", "CounterMask": "6",
"EventCode": "0x60", "EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD_GE_6", "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD_GE_6",
...@@ -317,6 +350,7 @@ ...@@ -317,6 +350,7 @@
}, },
{ {
"BriefDescription": "Number of times an RTM execution aborted.", "BriefDescription": "Number of times an RTM execution aborted.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9", "EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED", "EventName": "RTM_RETIRED.ABORTED",
"PEBS": "1", "PEBS": "1",
...@@ -326,6 +360,7 @@ ...@@ -326,6 +360,7 @@
}, },
{ {
"BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)", "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9", "EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_EVENTS", "EventName": "RTM_RETIRED.ABORTED_EVENTS",
"PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).", "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).",
...@@ -334,6 +369,7 @@ ...@@ -334,6 +369,7 @@
}, },
{ {
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)", "BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9", "EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEM", "EventName": "RTM_RETIRED.ABORTED_MEM",
"PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).", "PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
...@@ -342,6 +378,7 @@ ...@@ -342,6 +378,7 @@
}, },
{ {
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type", "BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9", "EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEMTYPE", "EventName": "RTM_RETIRED.ABORTED_MEMTYPE",
"PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.", "PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.",
...@@ -350,6 +387,7 @@ ...@@ -350,6 +387,7 @@
}, },
{ {
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions", "BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9", "EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY", "EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY",
"PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.", "PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.",
...@@ -358,6 +396,7 @@ ...@@ -358,6 +396,7 @@
}, },
{ {
"BriefDescription": "Number of times an RTM execution successfully committed", "BriefDescription": "Number of times an RTM execution successfully committed",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9", "EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT", "EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Counts the number of times RTM commit succeeded.", "PublicDescription": "Counts the number of times RTM commit succeeded.",
...@@ -366,6 +405,7 @@ ...@@ -366,6 +405,7 @@
}, },
{ {
"BriefDescription": "Number of times an RTM execution started.", "BriefDescription": "Number of times an RTM execution started.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9", "EventCode": "0xc9",
"EventName": "RTM_RETIRED.START", "EventName": "RTM_RETIRED.START",
"PublicDescription": "Counts the number of times we entered an RTM region. Does not count nested transactions.", "PublicDescription": "Counts the number of times we entered an RTM region. Does not count nested transactions.",
...@@ -374,6 +414,7 @@ ...@@ -374,6 +414,7 @@
}, },
{ {
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed inside a transactional region", "BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed inside a transactional region",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5d", "EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2", "EventName": "TX_EXEC.MISC2",
"PublicDescription": "Counts Unfriendly TSX abort triggered by a vzeroupper instruction.", "PublicDescription": "Counts Unfriendly TSX abort triggered by a vzeroupper instruction.",
...@@ -382,6 +423,7 @@ ...@@ -382,6 +423,7 @@
}, },
{ {
"BriefDescription": "Number of times an instruction execution caused the transactional nest count supported to be exceeded", "BriefDescription": "Number of times an instruction execution caused the transactional nest count supported to be exceeded",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5d", "EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3", "EventName": "TX_EXEC.MISC3",
"PublicDescription": "Counts Unfriendly TSX abort triggered by a nest count that is too deep.", "PublicDescription": "Counts Unfriendly TSX abort triggered by a nest count that is too deep.",
...@@ -390,6 +432,7 @@ ...@@ -390,6 +432,7 @@
}, },
{ {
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional reads", "BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional reads",
"Counter": "0,1,2,3",
"EventCode": "0x54", "EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_READ", "EventName": "TX_MEM.ABORT_CAPACITY_READ",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads", "PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads",
...@@ -398,6 +441,7 @@ ...@@ -398,6 +441,7 @@
}, },
{ {
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional writes.", "BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional writes.",
"Counter": "0,1,2,3",
"EventCode": "0x54", "EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE", "EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes.", "PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes.",
...@@ -406,6 +450,7 @@ ...@@ -406,6 +450,7 @@
}, },
{ {
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address", "BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address",
"Counter": "0,1,2,3",
"EventCode": "0x54", "EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT", "EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Counts the number of times a TSX line had a cache conflict.", "PublicDescription": "Counts the number of times a TSX line had a cache conflict.",
......
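The "Counter" fields added throughout this change record which general-purpose counters each event may be scheduled on (0-3 only versus all of 0-7); a small sketch, assuming a local copy of one of these JSON files, that groups events by that constraint:

import json
from collections import defaultdict

# Hypothetical path; point it at any of the icelakex event JSON files.
with open("memory.json") as f:
    events = json.load(f)

by_counters = defaultdict(list)
for ev in events:
    by_counters[ev.get("Counter", "unspecified")].append(ev["EventName"])

for counters, names in sorted(by_counters.items()):
    print(f"{counters}: {len(names)} events, e.g. {names[0]}")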
...@@ -5,7 +5,20 @@ ...@@ -5,7 +5,20 @@
"BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet", "Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
......
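The Bv* entries above register new metric group names; a hedged sketch of listing and using metric groups with the perf tool (output depends on the perf build and CPU; TopdownL1 is used here only as a familiar example group, and the workload path is hypothetical):

import subprocess

# Show the metric groups this perf build knows about.
subprocess.run(["perf", "list", "metricgroup"], check=True)

# Collect a metric-group breakdown for a hypothetical workload.
subprocess.run(["perf", "stat", "-M", "TopdownL1", "--", "./my_workload"],
               check=True)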
[ [
{ {
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule.", "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule.",
"Counter": "0,1,2,3",
"EventCode": "0x28", "EventCode": "0x28",
"EventName": "CORE_POWER.LVL0_TURBO_LICENSE", "EventName": "CORE_POWER.LVL0_TURBO_LICENSE",
"PublicDescription": "Counts Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes.", "PublicDescription": "Counts Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes.",
...@@ -9,6 +10,7 @@ ...@@ -9,6 +10,7 @@
}, },
{ {
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule.", "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule.",
"Counter": "0,1,2,3",
"EventCode": "0x28", "EventCode": "0x28",
"EventName": "CORE_POWER.LVL1_TURBO_LICENSE", "EventName": "CORE_POWER.LVL1_TURBO_LICENSE",
"PublicDescription": "Counts Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions.", "PublicDescription": "Counts Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions.",
...@@ -17,6 +19,7 @@ ...@@ -17,6 +19,7 @@
}, },
{ {
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX512 turbo schedule.", "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX512 turbo schedule.",
"Counter": "0,1,2,3",
"EventCode": "0x28", "EventCode": "0x28",
"EventName": "CORE_POWER.LVL2_TURBO_LICENSE", "EventName": "CORE_POWER.LVL2_TURBO_LICENSE",
"PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture). This includes high current AVX 512-bit instructions.", "PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture). This includes high current AVX 512-bit instructions.",
...@@ -25,6 +28,7 @@ ...@@ -25,6 +28,7 @@
}, },
{ {
"BriefDescription": "Hit snoop reply with data, line invalidated.", "BriefDescription": "Hit snoop reply with data, line invalidated.",
"Counter": "0,1,2,3",
"EventCode": "0xef", "EventCode": "0xef",
"EventName": "CORE_SNOOP_RESPONSE.I_FWD_FE", "EventName": "CORE_SNOOP_RESPONSE.I_FWD_FE",
"PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's cache, after the data is forwarded back to the requestor and indicating the data was found unmodified in the (FE) Forward or Exclusive State in this cores caches cache. A single snoop response from the core counts on all hyperthreads of the core.", "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's cache, after the data is forwarded back to the requestor and indicating the data was found unmodified in the (FE) Forward or Exclusive State in this cores caches cache. A single snoop response from the core counts on all hyperthreads of the core.",
...@@ -33,6 +37,7 @@ ...@@ -33,6 +37,7 @@
}, },
{ {
"BriefDescription": "HitM snoop reply with data, line invalidated.", "BriefDescription": "HitM snoop reply with data, line invalidated.",
"Counter": "0,1,2,3",
"EventCode": "0xef", "EventCode": "0xef",
"EventName": "CORE_SNOOP_RESPONSE.I_FWD_M", "EventName": "CORE_SNOOP_RESPONSE.I_FWD_M",
"PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's caches, after the data is forwarded back to the requestor, and indicating the data was found modified(M) in this cores caches cache (aka HitM response). A single snoop response from the core counts on all hyperthreads of the core.", "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's caches, after the data is forwarded back to the requestor, and indicating the data was found modified(M) in this cores caches cache (aka HitM response). A single snoop response from the core counts on all hyperthreads of the core.",
...@@ -41,6 +46,7 @@ ...@@ -41,6 +46,7 @@
}, },
{ {
"BriefDescription": "Hit snoop reply without sending the data, line invalidated.", "BriefDescription": "Hit snoop reply without sending the data, line invalidated.",
"Counter": "0,1,2,3",
"EventCode": "0xef", "EventCode": "0xef",
"EventName": "CORE_SNOOP_RESPONSE.I_HIT_FSE", "EventName": "CORE_SNOOP_RESPONSE.I_HIT_FSE",
"PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated in this core's caches without being forwarded back to the requestor. The line was in Forward, Shared or Exclusive (FSE) state in this cores caches. A single snoop response from the core counts on all hyperthreads of the core.", "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated in this core's caches without being forwarded back to the requestor. The line was in Forward, Shared or Exclusive (FSE) state in this cores caches. A single snoop response from the core counts on all hyperthreads of the core.",
...@@ -49,6 +55,7 @@ ...@@ -49,6 +55,7 @@
}, },
{ {
"BriefDescription": "Line not found snoop reply", "BriefDescription": "Line not found snoop reply",
"Counter": "0,1,2,3",
"EventCode": "0xef", "EventCode": "0xef",
"EventName": "CORE_SNOOP_RESPONSE.MISS", "EventName": "CORE_SNOOP_RESPONSE.MISS",
"PublicDescription": "Counts responses to snoops indicating that the data was not found (IHitI) in this core's caches. A single snoop response from the core counts on all hyperthreads of the Core.", "PublicDescription": "Counts responses to snoops indicating that the data was not found (IHitI) in this core's caches. A single snoop response from the core counts on all hyperthreads of the Core.",
...@@ -57,6 +64,7 @@ ...@@ -57,6 +64,7 @@
}, },
{ {
"BriefDescription": "Hit snoop reply with data, line kept in Shared state.", "BriefDescription": "Hit snoop reply with data, line kept in Shared state.",
"Counter": "0,1,2,3",
"EventCode": "0xef", "EventCode": "0xef",
"EventName": "CORE_SNOOP_RESPONSE.S_FWD_FE", "EventName": "CORE_SNOOP_RESPONSE.S_FWD_FE",
"PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (FS) Forward or Shared state. A single snoop response from the core counts on all hyperthreads of the core.", "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (FS) Forward or Shared state. A single snoop response from the core counts on all hyperthreads of the core.",
...@@ -65,6 +73,7 @@ ...@@ -65,6 +73,7 @@
}, },
{ {
"BriefDescription": "HitM snoop reply with data, line kept in Shared state", "BriefDescription": "HitM snoop reply with data, line kept in Shared state",
"Counter": "0,1,2,3",
"EventCode": "0xef", "EventCode": "0xef",
"EventName": "CORE_SNOOP_RESPONSE.S_FWD_M", "EventName": "CORE_SNOOP_RESPONSE.S_FWD_M",
"PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (M)odified state. A single snoop response from the core counts on all hyperthreads of the core.", "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (M)odified state. A single snoop response from the core counts on all hyperthreads of the core.",
...@@ -73,6 +82,7 @@ ...@@ -73,6 +82,7 @@
}, },
{ {
"BriefDescription": "Hit snoop reply without sending the data, line kept in Shared state.", "BriefDescription": "Hit snoop reply without sending the data, line kept in Shared state.",
"Counter": "0,1,2,3",
"EventCode": "0xef", "EventCode": "0xef",
"EventName": "CORE_SNOOP_RESPONSE.S_HIT_FSE", "EventName": "CORE_SNOOP_RESPONSE.S_HIT_FSE",
"PublicDescription": "Counts responses to snoops indicating the line was kept on this core in the (S)hared state, and that the data was found unmodified but not forwarded back to the requestor, initially the data was found in the cache in the (FSE) Forward, Shared state or Exclusive state. A single snoop response from the core counts on all hyperthreads of the core.", "PublicDescription": "Counts responses to snoops indicating the line was kept on this core in the (S)hared state, and that the data was found unmodified but not forwarded back to the requestor, initially the data was found in the cache in the (FSE) Forward, Shared state or Exclusive state. A single snoop response from the core counts on all hyperthreads of the core.",
...@@ -81,6 +91,7 @@ ...@@ -81,6 +91,7 @@
}, },
{ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.", "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE", "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -90,6 +101,7 @@ ...@@ -90,6 +101,7 @@
}, },
{ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.", "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.DRAM", "EventName": "OCR.DEMAND_CODE_RD.DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -99,6 +111,7 @@ ...@@ -99,6 +111,7 @@
}, },
{ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.", "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM", "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -108,6 +121,7 @@ ...@@ -108,6 +121,7 @@
}, },
{ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.", "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.SNC_DRAM", "EventName": "OCR.DEMAND_CODE_RD.SNC_DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -117,6 +131,7 @@ ...@@ -117,6 +131,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that have any type of response.", "BriefDescription": "Counts demand data reads that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE", "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -126,6 +141,7 @@ ...@@ -126,6 +141,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by DRAM.", "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.DRAM", "EventName": "OCR.DEMAND_DATA_RD.DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -135,6 +151,7 @@ ...@@ -135,6 +151,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.", "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM", "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -144,6 +161,7 @@ ...@@ -144,6 +161,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.", "BriefDescription": "Counts demand data reads that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.LOCAL_PMM", "EventName": "OCR.DEMAND_DATA_RD.LOCAL_PMM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -153,6 +171,7 @@ ...@@ -153,6 +171,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by PMM.", "BriefDescription": "Counts demand data reads that were supplied by PMM.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.PMM", "EventName": "OCR.DEMAND_DATA_RD.PMM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -162,6 +181,7 @@ ...@@ -162,6 +181,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.", "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM", "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -171,6 +191,7 @@ ...@@ -171,6 +191,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by PMM attached to another socket.", "BriefDescription": "Counts demand data reads that were supplied by PMM attached to another socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_PMM", "EventName": "OCR.DEMAND_DATA_RD.REMOTE_PMM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -180,6 +201,7 @@ ...@@ -180,6 +201,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.", "BriefDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.SNC_DRAM", "EventName": "OCR.DEMAND_DATA_RD.SNC_DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -189,6 +211,7 @@ ...@@ -189,6 +211,7 @@
}, },
{ {
"BriefDescription": "Counts demand data reads that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.", "BriefDescription": "Counts demand data reads that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.SNC_PMM", "EventName": "OCR.DEMAND_DATA_RD.SNC_PMM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -198,6 +221,7 @@ ...@@ -198,6 +221,7 @@
}, },
{ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.", "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.ANY_RESPONSE", "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -207,6 +231,7 @@ ...@@ -207,6 +231,7 @@
}, },
{ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.", "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.DRAM", "EventName": "OCR.DEMAND_RFO.DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
...@@ -216,6 +241,7 @@ ...@@ -216,6 +241,7 @@
}, },
{ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.", "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB", "EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.LOCAL_DRAM", "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7", "MSRIndex": "0x1a6,0x1a7",
@@ -225,6 +251,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.LOCAL_PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -234,6 +261,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -243,6 +271,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM attached to another socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.REMOTE_PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -252,6 +281,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.SNC_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -261,6 +291,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.SNC_PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -270,6 +301,7 @@
},
{
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were supplied by DRAM.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -279,6 +311,7 @@
},
{
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -288,6 +321,7 @@
},
{
"BriefDescription": "Counts hardware prefetch (which bring data to L2) that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -297,6 +331,7 @@
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -306,6 +341,7 @@
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.REMOTE",
"MSRIndex": "0x1a6,0x1a7",
@@ -315,6 +351,7 @@
},
{
"BriefDescription": "Counts full cacheline writes (ItoM) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ITOM.REMOTE",
"MSRIndex": "0x1a6,0x1a7",
@@ -324,6 +361,7 @@
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -333,6 +371,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -342,6 +381,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -351,6 +391,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -360,6 +401,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.LOCAL_PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -369,6 +411,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -378,6 +421,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM accesses that are controlled by the close or distant SNC Cluster.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -387,6 +431,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE",
"MSRIndex": "0x1a6,0x1a7",
@@ -396,6 +441,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -405,6 +451,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_MEMORY",
"MSRIndex": "0x1a6,0x1a7",
@@ -414,6 +461,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to another socket.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -423,6 +471,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.SNC_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -432,6 +481,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.SNC_PMM",
"MSRIndex": "0x1a6,0x1a7",
@@ -441,6 +491,7 @@
},
{
"BriefDescription": "Counts streaming stores that have any type of response.",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -450,6 +501,7 @@
},
{
"BriefDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM)",
"Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.WRITE_ESTIMATE.MEMORY",
"MSRIndex": "0x1a6,0x1a7",
...
[
{
"BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x14",
"EventName": "ARITH.DIVIDER_ACTIVE",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.ANY",
"PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware Examples include AD (page Access Dirty), FP and AVX related assists.",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "All branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PEBS": "1",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND",
"PEBS": "1",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_NTAKEN",
"PEBS": "1",
@@ -44,6 +49,7 @@
},
{
"BriefDescription": "Taken conditional branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_TAKEN",
"PEBS": "1",
@@ -53,6 +59,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
"PEBS": "1",
@@ -62,6 +69,7 @@
},
{
"BriefDescription": "Indirect near branch instructions retired (excluding returns)",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT",
"PEBS": "1",
@@ -71,6 +79,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -80,6 +89,7 @@
},
{
"BriefDescription": "Return instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
"PEBS": "1",
@@ -89,6 +99,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -98,6 +109,7 @@
},
{
"BriefDescription": "All mispredicted branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PEBS": "1",
@@ -106,6 +118,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND",
"PEBS": "1",
@@ -115,6 +128,7 @@
},
{
"BriefDescription": "Mispredicted non-taken conditional branch instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_NTAKEN",
"PEBS": "1",
@@ -124,6 +138,7 @@
},
{
"BriefDescription": "number of branch instructions retired that were mispredicted and taken.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_TAKEN",
"PEBS": "1",
@@ -133,6 +148,7 @@
},
{
"BriefDescription": "All miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT",
"PEBS": "1",
@@ -142,6 +158,7 @@
},
{
"BriefDescription": "Mispredicted indirect CALL instructions retired.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
"PEBS": "1",
@@ -151,6 +168,7 @@
},
{
"BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -160,6 +178,7 @@
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RET",
"PEBS": "1",
@@ -169,6 +188,7 @@
},
{
"BriefDescription": "Cycle counts are evenly distributed between active threads in the Core.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.DISTRIBUTED",
"PublicDescription": "This event distributes cycle counts between active hyperthreads, i.e., those in C0. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If all other hyperthreads are inactive (or disabled or do not exist), all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -177,6 +197,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"PublicDescription": "Counts Core crystal clock cycles when current thread is unhalted and the other thread is halted.",
@@ -185,6 +206,7 @@
},
{
"BriefDescription": "Core crystal clock cycles. Cycle counts are evenly distributed between active threads in the Core.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED",
"PublicDescription": "This event distributes Core crystal clock cycle counts between active hyperthreads, i.e., those in C0 sleep-state. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If one thread is active in a core, all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -193,6 +215,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
"Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
@@ -200,6 +223,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when the thread is unhalted.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Counts core crystal clock cycles when the thread is unhalted.",
@@ -208,6 +232,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
"Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -215,6 +240,7 @@
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -222,6 +248,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
"Counter": "0,1,2,3",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -230,6 +257,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
"Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -238,6 +266,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "16",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -246,6 +275,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
"Counter": "0,1,2,3",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -254,6 +284,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
"Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -262,6 +293,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "20",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -270,6 +302,7 @@
},
{
"BriefDescription": "Total execution stalls.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -278,6 +311,7 @@
},
{
"BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
@@ -286,6 +320,7 @@
},
{
"BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
@@ -294,6 +329,7 @@
},
{
"BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
"PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -302,6 +338,7 @@
},
{
"BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
"PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -310,6 +347,7 @@
},
{
"BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
@@ -319,6 +357,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to DECODE.LCP]",
"Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to DECODE.LCP]",
@@ -327,6 +366,7 @@
},
{
"BriefDescription": "Instruction decoders utilized in a cycle",
"Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "INST_DECODED.DECODERS",
"PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
@@ -335,6 +375,7 @@
},
{
"BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
"Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
"PEBS": "1",
"PublicDescription": "Counts the number of instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
@@ -343,6 +384,7 @@
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
"PEBS": "1",
@@ -351,6 +393,7 @@
},
{
"BriefDescription": "Number of all retired NOP instructions.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.NOP",
"PEBS": "1",
@@ -359,6 +402,7 @@
},
{
"BriefDescription": "Precise instruction retired event with a reduced effect of PEBS shadow in IP distribution",
"Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.PREC_DIST",
"PEBS": "1",
"PublicDescription": "A version of INST_RETIRED that allows for a more unbiased distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR) feature to mitigate some bias in how retired instructions get sampled. Use on Fixed Counter 0.",
@@ -367,6 +411,7 @@
},
{
"BriefDescription": "Cycles the Backend cluster is recovering after a miss-speculation or a Store Buffer or Load Buffer drain stall.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.ALL_RECOVERY_CYCLES",
@@ -376,6 +421,7 @@
},
{
"BriefDescription": "Clears speculative count",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x0D",
@@ -386,6 +432,7 @@
},
{
"BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0d",
"EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
"PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
@@ -394,6 +441,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
"PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
@@ -402,6 +450,7 @@
},
{
"BriefDescription": "TMA slots where uops got dropped",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0d",
"EventName": "INT_MISC.UOP_DROPPING",
"PublicDescription": "Estimated number of Top-down Microarchitecture Analysis slots that got dropped due to non front-end reasons",
@@ -410,6 +459,7 @@
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
"Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -418,6 +468,7 @@
},
{
"BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
"Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
@@ -426,6 +477,7 @@
},
{
"BriefDescription": "False dependencies due to partial compare on address.",
"Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "Counts the number of times a load got blocked due to false dependencies due to partial compare on address.",
@@ -434,6 +486,7 @@
},
{
"BriefDescription": "Counts the number of demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch.",
"Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PREFETCH.SWPF",
"PublicDescription": "Counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
@@ -442,6 +495,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
"Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -451,6 +505,7 @@
},
{
"BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.",
"Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa8",
"EventName": "LSD.CYCLES_OK",
@@ -460,6 +515,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
"Counter": "0,1,2,3",
"EventCode": "0xa8",
"EventName": "LSD.UOPS",
"PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
@@ -468,6 +524,7 @@
},
{ {
"BriefDescription": "Number of machine clears (nukes) of any type.", "BriefDescription": "Number of machine clears (nukes) of any type.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1", "CounterMask": "1",
"EdgeDetect": "1", "EdgeDetect": "1",
"EventCode": "0xc3", "EventCode": "0xc3",
...@@ -478,6 +535,7 @@ ...@@ -478,6 +535,7 @@
}, },
{ {
"BriefDescription": "Self-modifying code (SMC) detected.", "BriefDescription": "Self-modifying code (SMC) detected.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3", "EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC", "EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.", "PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
...@@ -486,6 +544,7 @@ ...@@ -486,6 +544,7 @@
}, },
{ {
"BriefDescription": "Increments whenever there is an update to the LBR array.", "BriefDescription": "Increments whenever there is an update to the LBR array.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc", "EventCode": "0xcc",
"EventName": "MISC_RETIRED.LBR_INSERTS", "EventName": "MISC_RETIRED.LBR_INSERTS",
"PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR to be enabled properly.", "PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR to be enabled properly.",
...@@ -494,6 +553,7 @@ ...@@ -494,6 +553,7 @@
}, },
{ {
"BriefDescription": "Number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.", "BriefDescription": "Number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc", "EventCode": "0xcc",
"EventName": "MISC_RETIRED.PAUSE_INST", "EventName": "MISC_RETIRED.PAUSE_INST",
"PublicDescription": "Counts number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.", "PublicDescription": "Counts number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.",
...@@ -502,6 +562,7 @@ ...@@ -502,6 +562,7 @@
}, },
{ {
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).", "BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2", "EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SB", "EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.", "PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.",
...@@ -510,6 +571,7 @@ ...@@ -510,6 +571,7 @@
}, },
{ {
"BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.", "BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2", "EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SCOREBOARD", "EventName": "RESOURCE_STALLS.SCOREBOARD",
"SampleAfterValue": "100003", "SampleAfterValue": "100003",
...@@ -517,6 +579,7 @@ ...@@ -517,6 +579,7 @@
}, },
{ {
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread", "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5e", "EventCode": "0x5e",
"EventName": "RS_EVENTS.EMPTY_CYCLES", "EventName": "RS_EVENTS.EMPTY_CYCLES",
"PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)", "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
...@@ -525,6 +588,7 @@ ...@@ -525,6 +588,7 @@
}, },
{ {
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.", "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1", "CounterMask": "1",
"EdgeDetect": "1", "EdgeDetect": "1",
"EventCode": "0x5E", "EventCode": "0x5E",
...@@ -536,6 +600,7 @@ ...@@ -536,6 +600,7 @@
}, },
{ {
"BriefDescription": "TMA slots where no uops were being issued due to lack of back-end resources.", "BriefDescription": "TMA slots where no uops were being issued due to lack of back-end resources.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4", "EventCode": "0xa4",
"EventName": "TOPDOWN.BACKEND_BOUND_SLOTS", "EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
"PublicDescription": "Counts the number of Top-down Microarchitecture Analysis (TMA) method's slots where no micro-operations were being issued from front-end to back-end of the machine due to lack of back-end resources.", "PublicDescription": "Counts the number of Top-down Microarchitecture Analysis (TMA) method's slots where no micro-operations were being issued from front-end to back-end of the machine due to lack of back-end resources.",
...@@ -544,6 +609,7 @@ ...@@ -544,6 +609,7 @@
}, },
{ {
"BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event", "BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
"Counter": "Fixed counter 3",
"EventName": "TOPDOWN.SLOTS", "EventName": "TOPDOWN.SLOTS",
"PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core. Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).", "PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core. Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
"SampleAfterValue": "10000003", "SampleAfterValue": "10000003",
...@@ -551,6 +617,7 @@ ...@@ -551,6 +617,7 @@
}, },
{ {
"BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event", "BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4", "EventCode": "0xa4",
"EventName": "TOPDOWN.SLOTS_P", "EventName": "TOPDOWN.SLOTS_P",
"PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core.", "PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core.",
...@@ -559,6 +626,7 @@ ...@@ -559,6 +626,7 @@
}, },
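The TOPDOWN.SLOTS description above notes that software can use this event as the denominator for the top-level TMA metrics. A minimal sketch of that arithmetic, assuming the commonly quoted slots-fraction definitions for level 1; the counter values are invented for illustration:

    # Sketch: level-1 TMA fractions computed as "slots in category / TOPDOWN.SLOTS".
    # All counts are made-up example values, not measurements.
    counts = {
        "TOPDOWN.SLOTS": 1_000_000,
        "TOPDOWN.BACKEND_BOUND_SLOTS": 250_000,  # event defined above
        "UOPS_RETIRED.SLOTS": 600_000,           # event defined later in this file
    }
    slots = counts["TOPDOWN.SLOTS"]
    backend_bound = counts["TOPDOWN.BACKEND_BOUND_SLOTS"] / slots
    retiring = counts["UOPS_RETIRED.SLOTS"] / slots
    print(f"backend bound: {backend_bound:.1%}, retiring: {retiring:.1%}")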
{
"BriefDescription": "Number of uops decoded out of instructions exclusively fetched by decoder 0",
"Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UOPS_DECODED.DEC0",
"PublicDescription": "Uops exclusively fetched by decoder 0",
@@ -567,6 +635,7 @@
},
{
"BriefDescription": "Number of uops executed on port 0",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_0",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0.",
@@ -575,6 +644,7 @@
},
{
"BriefDescription": "Number of uops executed on port 1",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_1",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 1.",
@@ -583,6 +653,7 @@
},
{
"BriefDescription": "Number of uops executed on port 2 and 3",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_2_3",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 2 and 3.",
@@ -591,6 +662,7 @@
},
{
"BriefDescription": "Number of uops executed on port 4 and 9",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_4_9",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 4 and 9.",
@@ -599,6 +671,7 @@
},
{
"BriefDescription": "Number of uops executed on port 5",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_5",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 5.",
@@ -607,6 +680,7 @@
},
{
"BriefDescription": "Number of uops executed on port 6",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_6",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 6.",
@@ -615,6 +689,7 @@
},
{
"BriefDescription": "Number of uops executed on port 7 and 8",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_7_8",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 7 and 8.",
@@ -623,6 +698,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -632,6 +708,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -641,6 +718,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -650,6 +728,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -659,6 +738,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1",
@@ -668,6 +748,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2",
@@ -677,6 +758,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3",
@@ -686,6 +768,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4",
@@ -695,6 +778,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -705,6 +789,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.THREAD",
"SampleAfterValue": "2000003",
@@ -712,6 +797,7 @@
},
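The UOPS_EXECUTED umasks above are often combined into a simple derived ratio: average uops executed per cycle in which at least one uop executed. A sketch of that calculation under that conventional definition; the counter values are invented:

    # Sketch: per-thread execution ILP ~= UOPS_EXECUTED.THREAD / UOPS_EXECUTED.CYCLES_GE_1.
    # Both values below are illustrative only, not measurements.
    uops_executed_thread = 4_800_000
    uops_executed_cycles_ge_1 = 1_600_000
    ilp = uops_executed_thread / uops_executed_cycles_ge_1
    print(f"average uops per active execution cycle: {ilp:.2f}")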
{
"BriefDescription": "Counts the number of x87 uops dispatched.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.X87",
"PublicDescription": "Counts the number of x87 uops executed.",
@@ -720,6 +806,7 @@
},
{
"BriefDescription": "Uops that RAT issues to RS",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0e",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
@@ -728,6 +815,7 @@
},
{
"BriefDescription": "Cycles when RAT does not issue Uops to RS for the thread",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -738,6 +826,7 @@
},
{
"BriefDescription": "Uops inserted at issue-stage in order to preserve upper bits of vector registers.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0e",
"EventName": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH",
"PublicDescription": "Counts the number of Blend Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed since every Intel SSE instruction executed in Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to 'Mixing Intel AVX and Intel SSE Code' section of the Optimization Guide.",
@@ -746,6 +835,7 @@
},
{
"BriefDescription": "Retirement slots used.",
"Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.SLOTS",
"PublicDescription": "Counts the retirement slots used each cycle.",
@@ -754,6 +844,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -764,6 +855,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
"Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "10",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
...
This source diff could not be displayed because it is too large.
This source diff could not be displayed because it is too large.
This source diff could not be displayed because it is too large.
[
{
"BriefDescription": "DRAM Activate Count : All Activates",
"Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M_ACT_COUNT.ALL",
"PerPkg": "1",
@@ -10,8 +11,10 @@
},
{
"BriefDescription": "DRAM Activate Count : Activate due to Bypass",
"Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M_ACT_COUNT.BYP",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Activate Count : Activate due to Bypass : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x8",
@@ -19,6 +22,7 @@
},
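The activate-count description above spells out a small derived calculation: subtract the page-miss precharges from the activates. A sketch of that arithmetic, taken exactly as the description states it; the counter values are invented, and UNC_M_PRE_COUNT.PAGE_MISS is the precharge event defined later in this file:

    # Sketch, mirroring the description above:
    #   derived count = DRAM activates - page-miss precharges
    # Values are illustrative only, not measurements.
    act_count_all = 2_000_000        # UNC_M_ACT_COUNT.ALL
    pre_count_page_miss = 1_400_000  # UNC_M_PRE_COUNT.PAGE_MISS
    print("derived count:", act_count_all - pre_count_page_miss)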
{
"BriefDescription": "All DRAM CAS commands issued",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
@@ -28,6 +32,7 @@
},
{
"BriefDescription": "All DRAM read CAS commands issued (including underfills)",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD",
"PerPkg": "1",
@@ -37,8 +42,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS commands w/auto-pre",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_PRE_REG",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS commands w/auto-pre : DRAM RD_CAS and WR_CAS Commands : Counts the total number of DRAM Read CAS commands issued on this channel. This includes both regular RD CAS commands as well as those with explicit Precharge. AutoPre is only used in systems that are using closed page policy. We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).",
"UMask": "0x2",
@@ -46,8 +53,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_PRE_UNDERFILL",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x8",
@@ -55,8 +64,10 @@
},
{
"BriefDescription": "All DRAM read CAS commands issued (does not include underfills)",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_REG",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of DRAM Read CAS commands issued on this channel. This includes both regular RD CAS commands as well as those with implicit Precharge. We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).",
"UMask": "0x1",
@@ -64,8 +75,10 @@
},
{
"BriefDescription": "DRAM underfill read CAS commands issued",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total of DRAM Read CAS commands issued due to an underfill",
"UMask": "0x4",
@@ -73,6 +86,7 @@
},
{
"BriefDescription": "All DRAM write CAS commands issued",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.WR",
"PerPkg": "1",
@@ -82,8 +96,10 @@
},
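The CAS-count events above are the usual inputs to DRAM bandwidth estimates, on the assumption that each CAS command transfers one 64-byte cache line. A hedged sketch; the counter values and the measurement interval are invented:

    # Sketch: DRAM bandwidth from CAS counts, assuming 64 bytes per CAS command.
    # Counts and interval are illustrative only, not measurements.
    cas_count_rd = 5_000_000   # UNC_M_CAS_COUNT.RD over the interval
    cas_count_wr = 2_000_000   # UNC_M_CAS_COUNT.WR over the interval
    interval_s = 1.0
    read_gbs = cas_count_rd * 64 / interval_s / 1e9
    write_gbs = cas_count_wr * 64 / interval_s / 1e9
    print(f"read: {read_gbs:.2f} GB/s, write: {write_gbs:.2f} GB/s")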
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/o auto-pre",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.WR_NONPRE",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/o auto-pre : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x10",
@@ -91,8 +107,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/ auto-pre",
"Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.WR_PRE",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/ auto-pre : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x20",
@@ -100,28 +118,34 @@
},
{
"BriefDescription": "DRAM Clockticks",
"Counter": "0,1,2,3",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Free running counter that increments for the Memory Controller",
"Counter": "4",
"EventCode": "0xff",
"EventName": "UNC_M_CLOCKTICKS_FREERUN",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "imc_free_running"
},
{
"BriefDescription": "DRAM Precharge All Commands",
"Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M_DRAM_PRE_ALL",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge All Commands : Counts the number of times that the precharge all command was sent.",
"Unit": "iMC"
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
"Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.HIGH",
"PerPkg": "1",
@@ -131,6 +155,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
"Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.OPPORTUNISTIC",
"PerPkg": "1",
@@ -140,6 +165,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
"Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.PANIC",
"PerPkg": "1",
@@ -149,6 +175,7 @@
},
{
"BriefDescription": "Half clockticks for IMC",
"Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_M_HCLOCKTICKS",
"PerPkg": "1",
@@ -156,37 +183,46 @@
},
{
"BriefDescription": "UNC_M_PARITY_ERRORS",
"Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_M_PARITY_ERRORS",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.RD",
"Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_PCLS.RD",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.TOTAL",
"Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_PCLS.TOTAL",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.WR",
"Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_PCLS.WR",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : All",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.ALL",
"PerPkg": "1",
@@ -196,22 +232,27 @@
},
{
"BriefDescription": "PMM Commands : Misc Commands (error, flow ACKs)",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.MISC",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Misc GNTs",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.MISC_GNT",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Reads - RPQ",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.RD",
"PerPkg": "1",
@@ -221,14 +262,17 @@
},
{
"BriefDescription": "PMM Commands : RPQ GNTs",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.RPQ_GNTS",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Underfill reads",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.UFILL_RD",
"PerPkg": "1",
@@ -238,14 +282,17 @@
},
{
"BriefDescription": "PMM Commands : Underfill GNTs",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.WPQ_GNTS",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Writes",
"Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.WR",
"PerPkg": "1",
@@ -255,84 +302,105 @@
},
{
"BriefDescription": "PMM Commands - Part 2 : Expected No data packet (ERID matched NDP encoding)",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.NODATA_EXP",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Unexpected No data packet (ERID matched a Read, but data was a NDP)",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.NODATA_UNEXP",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Opportunistic Reads",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.OPP_RD",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : ECC Errors",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ECC_ERROR",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : ERID detectable parity error",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ERID_ERROR",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ERID_STARVED",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Read Requests - Slot 0",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.REQS_SLOT0",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Read Requests - Slot 1",
"Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.REQS_SLOT1",
"Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Queue Cycles Full",
"Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M_PMM_RPQ_CYCLES_FULL",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Queue Cycles Not Empty",
"Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M_PMM_RPQ_CYCLES_NE",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Queue Inserts",
"Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M_PMM_RPQ_INSERTS",
"PerPkg": "1",
@@ -341,6 +409,7 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -350,8 +419,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.GNT_WAIT",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x4",
@@ -359,8 +430,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.NO_GNT",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x2",
@@ -368,34 +441,43 @@
},
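Because UNC_M_PMM_RPQ_OCCUPANCY.ALL accumulates per-cycle occupancy while UNC_M_PMM_RPQ_INSERTS counts entries, their ratio gives a Little's-law style estimate of average read-queue residency in iMC clocks. This derivation is a conventional use of occupancy/inserts pairs rather than something stated in this file; the values below are invented:

    # Sketch: average PMM read pending queue residency, in iMC clock cycles,
    # estimated as occupancy / inserts. Values are illustrative only.
    rpq_occupancy_all = 9_000_000  # UNC_M_PMM_RPQ_OCCUPANCY.ALL
    rpq_inserts = 30_000           # UNC_M_PMM_RPQ_INSERTS
    avg_residency_clks = rpq_occupancy_all / rpq_inserts
    print(f"average RPQ residency: {avg_residency_clks:.0f} iMC clocks")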
{
"BriefDescription": "PMM Write Queue Cycles Full",
"Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_M_PMM_WPQ_CYCLES_FULL",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Write Queue Cycles Not Empty",
"Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M_PMM_WPQ_CYCLES_NE",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PMM_WPQ_FLUSH",
"Counter": "0,1,2,3",
"EventCode": "0xe8",
"EventName": "UNC_M_PMM_WPQ_FLUSH",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PMM_WPQ_FLUSH_CYC",
"Counter": "0,1,2,3",
"EventCode": "0xe9",
"EventName": "UNC_M_PMM_WPQ_FLUSH_CYC",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Write Queue Inserts",
"Counter": "0,1,2,3",
"EventCode": "0xE7",
"EventName": "UNC_M_PMM_WPQ_INSERTS",
"PerPkg": "1",
@@ -404,6 +486,7 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -413,8 +496,10 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.CAS",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Write Pending Queue.",
"UMask": "0x2",
@@ -422,8 +507,10 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.PWR",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Write Pending Queue.",
"UMask": "0x4",
@@ -431,16 +518,20 @@
},
{
"BriefDescription": "Channel PPD Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M_POWER_CHANNEL_PPD",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Channel PPD Cycles : Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this can count the number of cycles when that could have been taken advantage of.",
"Unit": "iMC"
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
"Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_0",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x1",
@@ -448,8 +539,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
"Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_1",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x2",
@@ -457,8 +550,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
"Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_2",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x4",
@@ -466,8 +561,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
"Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_3",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x8",
@@ -475,8 +572,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
"Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT0",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. : Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.",
"UMask": "0x1",
@@ -484,8 +583,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
"Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT1",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x2",
@@ -493,16 +594,20 @@
},
{
"BriefDescription": "Clock-Enabled Self-Refresh",
"Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M_POWER_SELF_REFRESH",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Clock-Enabled Self-Refresh : Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.",
"Unit": "iMC"
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
"Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT0",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. : Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.",
"UMask": "0x1",
@@ -510,8 +615,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
"Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT1",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x2",
@@ -519,6 +626,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
"Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.ALL",
"PerPkg": "1",
@@ -528,8 +636,10 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to page miss",
"Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.PAGE_MISS",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Precharge due to page miss : Counts the number of DRAM Precharge commands sent on this channel. : Pages Misses are due to precharges from bank scheduler (rd/wr requests)",
"UMask": "0xc",
@@ -537,6 +647,7 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to page table",
"Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.PGT",
"PerPkg": "1",
@@ -546,6 +657,7 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to read",
"Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.RD",
"PerPkg": "1",
@@ -555,6 +667,7 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to write",
"Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.WR",
"PerPkg": "1",
@@ -564,52 +677,66 @@
},
{
"BriefDescription": "Read Data Buffer Full",
"Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M_RDB_FULL",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Inserts",
"Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M_RDB_INSERTS",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Not Empty",
"Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M_RDB_NOT_EMPTY",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M_RDB_OCCUPANCY",
"Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Full Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M_RPQ_CYCLES_FULL_PCH0",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Full Cycles : Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. This count should be similar count in the HA which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see RPQ become full except for potentially during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries.",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Full Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M_RPQ_CYCLES_FULL_PCH1",
"Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Full Cycles : Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. This count should be similar count in the HA which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see RPQ become full except for potentially during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries.",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Read Pending Queue Not Empty", "BriefDescription": "Read Pending Queue Not Empty",
"Counter": "0,1,2,3",
"EventCode": "0x11", "EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE.PCH0", "EventName": "UNC_M_RPQ_CYCLES_NE.PCH0",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Read Pending Queue Not Empty : Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.", "PublicDescription": "Read Pending Queue Not Empty : Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.",
"UMask": "0x1", "UMask": "0x1",
@@ -617,8 +744,10 @@
}, },
{ {
"BriefDescription": "Read Pending Queue Not Empty", "BriefDescription": "Read Pending Queue Not Empty",
"Counter": "0,1,2,3",
"EventCode": "0x11", "EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE.PCH1", "EventName": "UNC_M_RPQ_CYCLES_NE.PCH1",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Read Pending Queue Not Empty : Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.", "PublicDescription": "Read Pending Queue Not Empty : Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.",
"UMask": "0x2", "UMask": "0x2",
@@ -626,6 +755,7 @@
}, },
{ {
"BriefDescription": "Read Pending Queue Allocations", "BriefDescription": "Read Pending Queue Allocations",
"Counter": "0,1,2,3",
"EventCode": "0x10", "EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH0", "EventName": "UNC_M_RPQ_INSERTS.PCH0",
"PerPkg": "1", "PerPkg": "1",
@@ -635,6 +765,7 @@
}, },
{ {
"BriefDescription": "Read Pending Queue Allocations", "BriefDescription": "Read Pending Queue Allocations",
"Counter": "0,1,2,3",
"EventCode": "0x10", "EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH1", "EventName": "UNC_M_RPQ_INSERTS.PCH1",
"PerPkg": "1", "PerPkg": "1",
@@ -644,6 +775,7 @@
}, },
{ {
"BriefDescription": "Read Pending Queue Occupancy", "BriefDescription": "Read Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0x80", "EventCode": "0x80",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH0", "EventName": "UNC_M_RPQ_OCCUPANCY_PCH0",
"PerPkg": "1", "PerPkg": "1",
@@ -652,6 +784,7 @@
}, },
{ {
"BriefDescription": "Read Pending Queue Occupancy", "BriefDescription": "Read Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0x81", "EventCode": "0x81",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH1", "EventName": "UNC_M_RPQ_OCCUPANCY_PCH1",
"PerPkg": "1", "PerPkg": "1",
@@ -660,749 +793,930 @@
}, },
{ {
"BriefDescription": "Scoreboard Accesses : Scoreboard Accesses Accepted", "BriefDescription": "Scoreboard Accesses : Scoreboard Accesses Accepted",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.ACCEPTS", "EventName": "UNC_M_SB_ACCESSES.ACCEPTS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x5", "UMask": "0x5",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated.", "BriefDescription": "This event is deprecated.",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd2", "EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.FMRD_CMPS", "EventName": "UNC_M_SB_ACCESSES.FMRD_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x40", "UMask": "0x40",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated.", "BriefDescription": "This event is deprecated.",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd2", "EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.FMWR_CMPS", "EventName": "UNC_M_SB_ACCESSES.FMWR_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x80", "UMask": "0x80",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : Write Accepts", "BriefDescription": "Scoreboard Accesses : Write Accepts",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.FM_RD_CMPS", "EventName": "UNC_M_SB_ACCESSES.FM_RD_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x40", "UMask": "0x40",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : Write Rejects", "BriefDescription": "Scoreboard Accesses : Write Rejects",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.FM_WR_CMPS", "EventName": "UNC_M_SB_ACCESSES.FM_WR_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x80", "UMask": "0x80",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated.", "BriefDescription": "This event is deprecated.",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd2", "EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.NMRD_CMPS", "EventName": "UNC_M_SB_ACCESSES.NMRD_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated.", "BriefDescription": "This event is deprecated.",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd2", "EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.NMWR_CMPS", "EventName": "UNC_M_SB_ACCESSES.NMWR_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : FM read completions", "BriefDescription": "Scoreboard Accesses : FM read completions",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.NM_RD_CMPS", "EventName": "UNC_M_SB_ACCESSES.NM_RD_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : FM write completions", "BriefDescription": "Scoreboard Accesses : FM write completions",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.NM_WR_CMPS", "EventName": "UNC_M_SB_ACCESSES.NM_WR_CMPS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : Read Accepts", "BriefDescription": "Scoreboard Accesses : Read Accepts",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.RD_ACCEPTS", "EventName": "UNC_M_SB_ACCESSES.RD_ACCEPTS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : Read Rejects", "BriefDescription": "Scoreboard Accesses : Read Rejects",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.RD_REJECTS", "EventName": "UNC_M_SB_ACCESSES.RD_REJECTS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : Scoreboard Accesses Rejected", "BriefDescription": "Scoreboard Accesses : Scoreboard Accesses Rejected",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.REJECTS", "EventName": "UNC_M_SB_ACCESSES.REJECTS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0xa", "UMask": "0xa",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : NM read completions", "BriefDescription": "Scoreboard Accesses : NM read completions",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.WR_ACCEPTS", "EventName": "UNC_M_SB_ACCESSES.WR_ACCEPTS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Accesses : NM write completions", "BriefDescription": "Scoreboard Accesses : NM write completions",
"Counter": "0,1,2,3",
"EventCode": "0xD2", "EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.WR_REJECTS", "EventName": "UNC_M_SB_ACCESSES.WR_REJECTS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Alloc", "BriefDescription": ": Alloc",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.ALLOC", "EventName": "UNC_M_SB_CANARY.ALLOC",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Dealloc", "BriefDescription": ": Dealloc",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.DEALLOC", "EventName": "UNC_M_SB_CANARY.DEALLOC",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_RD_STARVED", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_RD_STARVED",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd9", "EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FMRD_STARVED", "EventName": "UNC_M_SB_CANARY.FMRD_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_TGR_WR_STARVED", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_TGR_WR_STARVED",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd9", "EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FMTGRWR_STARVED", "EventName": "UNC_M_SB_CANARY.FMTGRWR_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x80", "UMask": "0x80",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_WR_STARVED", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_WR_STARVED",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd9", "EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FMWR_STARVED", "EventName": "UNC_M_SB_CANARY.FMWR_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x40", "UMask": "0x40",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Write Starved", "BriefDescription": ": Near Mem Write Starved",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FM_RD_STARVED", "EventName": "UNC_M_SB_CANARY.FM_RD_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Write Starved", "BriefDescription": ": Far Mem Write Starved",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FM_TGR_WR_STARVED", "EventName": "UNC_M_SB_CANARY.FM_TGR_WR_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x80", "UMask": "0x80",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Read Starved", "BriefDescription": ": Far Mem Read Starved",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FM_WR_STARVED", "EventName": "UNC_M_SB_CANARY.FM_WR_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x40", "UMask": "0x40",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.NM_RD_STARVED", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.NM_RD_STARVED",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd9", "EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.NMRD_STARVED", "EventName": "UNC_M_SB_CANARY.NMRD_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.NM_WR_STARVED", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.NM_WR_STARVED",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd9", "EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.NMWR_STARVED", "EventName": "UNC_M_SB_CANARY.NMWR_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Valid", "BriefDescription": ": Valid",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.NM_RD_STARVED", "EventName": "UNC_M_SB_CANARY.NM_RD_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Read Starved", "BriefDescription": ": Near Mem Read Starved",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.NM_WR_STARVED", "EventName": "UNC_M_SB_CANARY.NM_WR_STARVED",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Reject", "BriefDescription": ": Reject",
"Counter": "0,1,2,3",
"EventCode": "0xD9", "EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.VLD", "EventName": "UNC_M_SB_CANARY.VLD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Cycles Full", "BriefDescription": "Scoreboard Cycles Full",
"Counter": "0,1,2,3",
"EventCode": "0xD1", "EventCode": "0xD1",
"EventName": "UNC_M_SB_CYCLES_FULL", "EventName": "UNC_M_SB_CYCLES_FULL",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Cycles Not-Empty", "BriefDescription": "Scoreboard Cycles Not-Empty",
"Counter": "0,1,2,3",
"EventCode": "0xD0", "EventCode": "0xD0",
"EventName": "UNC_M_SB_CYCLES_NE", "EventName": "UNC_M_SB_CYCLES_NE",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Inserts : Block region reads", "BriefDescription": "Scoreboard Inserts : Block region reads",
"Counter": "0,1,2,3",
"EventCode": "0xD6", "EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_RDS", "EventName": "UNC_M_SB_INSERTS.BLOCK_RDS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Inserts : Block region writes", "BriefDescription": "Scoreboard Inserts : Block region writes",
"Counter": "0,1,2,3",
"EventCode": "0xD6", "EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_WRS", "EventName": "UNC_M_SB_INSERTS.BLOCK_WRS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Inserts : Persistent Mem reads", "BriefDescription": "Scoreboard Inserts : Persistent Mem reads",
"Counter": "0,1,2,3",
"EventCode": "0xD6", "EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.PMM_RDS", "EventName": "UNC_M_SB_INSERTS.PMM_RDS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Inserts : Persistent Mem writes", "BriefDescription": "Scoreboard Inserts : Persistent Mem writes",
"Counter": "0,1,2,3",
"EventCode": "0xD6", "EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.PMM_WRS", "EventName": "UNC_M_SB_INSERTS.PMM_WRS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Inserts : Reads", "BriefDescription": "Scoreboard Inserts : Reads",
"Counter": "0,1,2,3",
"EventCode": "0xD6", "EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.RDS", "EventName": "UNC_M_SB_INSERTS.RDS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Inserts : Writes", "BriefDescription": "Scoreboard Inserts : Writes",
"Counter": "0,1,2,3",
"EventCode": "0xD6", "EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.WRS", "EventName": "UNC_M_SB_INSERTS.WRS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Occupancy : Block region reads", "BriefDescription": "Scoreboard Occupancy : Block region reads",
"Counter": "0,1,2,3",
"EventCode": "0xD5", "EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_RDS", "EventName": "UNC_M_SB_OCCUPANCY.BLOCK_RDS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Occupancy : Block region writes", "BriefDescription": "Scoreboard Occupancy : Block region writes",
"Counter": "0,1,2,3",
"EventCode": "0xD5", "EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_WRS", "EventName": "UNC_M_SB_OCCUPANCY.BLOCK_WRS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x40", "UMask": "0x40",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Occupancy : Persistent Mem reads", "BriefDescription": "Scoreboard Occupancy : Persistent Mem reads",
"Counter": "0,1,2,3",
"EventCode": "0xD5", "EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_RDS", "EventName": "UNC_M_SB_OCCUPANCY.PMM_RDS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Occupancy : Persistent Mem writes", "BriefDescription": "Scoreboard Occupancy : Persistent Mem writes",
"Counter": "0,1,2,3",
"EventCode": "0xD5", "EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_WRS", "EventName": "UNC_M_SB_OCCUPANCY.PMM_WRS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Occupancy : Reads", "BriefDescription": "Scoreboard Occupancy : Reads",
"Counter": "0,1,2,3",
"EventCode": "0xD5", "EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.RDS", "EventName": "UNC_M_SB_OCCUPANCY.RDS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Prefetch Inserts : All", "BriefDescription": "Scoreboard Prefetch Inserts : All",
"Counter": "0,1,2,3",
"EventCode": "0xDA", "EventCode": "0xDA",
"EventName": "UNC_M_SB_PREF_INSERTS.ALL", "EventName": "UNC_M_SB_PREF_INSERTS.ALL",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Prefetch Inserts : DDR4", "BriefDescription": "Scoreboard Prefetch Inserts : DDR4",
"Counter": "0,1,2,3",
"EventCode": "0xDA", "EventCode": "0xDA",
"EventName": "UNC_M_SB_PREF_INSERTS.DDR", "EventName": "UNC_M_SB_PREF_INSERTS.DDR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Prefetch Inserts : Persistent Mem", "BriefDescription": "Scoreboard Prefetch Inserts : Persistent Mem",
"Counter": "0,1,2,3",
"EventCode": "0xDA", "EventCode": "0xDA",
"EventName": "UNC_M_SB_PREF_INSERTS.PMM", "EventName": "UNC_M_SB_PREF_INSERTS.PMM",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Prefetch Occupancy : All", "BriefDescription": "Scoreboard Prefetch Occupancy : All",
"Counter": "0,1,2,3",
"EventCode": "0xDB", "EventCode": "0xDB",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.ALL", "EventName": "UNC_M_SB_PREF_OCCUPANCY.ALL",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Prefetch Occupancy : DDR4", "BriefDescription": "Scoreboard Prefetch Occupancy : DDR4",
"Counter": "0,1,2,3",
"EventCode": "0xDB", "EventCode": "0xDB",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.DDR", "EventName": "UNC_M_SB_PREF_OCCUPANCY.DDR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_PREF_OCCUPANCY.PMM", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_PREF_OCCUPANCY.PMM",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xdb", "EventCode": "0xdb",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.PMEM", "EventName": "UNC_M_SB_PREF_OCCUPANCY.PMEM",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Scoreboard Prefetch Occupancy : Persistent Mem", "BriefDescription": "Scoreboard Prefetch Occupancy : Persistent Mem",
"Counter": "0,1,2,3",
"EventCode": "0xdb", "EventCode": "0xdb",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.PMM", "EventName": "UNC_M_SB_PREF_OCCUPANCY.PMM",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Number of Scoreboard Requests Rejected", "BriefDescription": "Number of Scoreboard Requests Rejected",
"Counter": "0,1,2,3",
"EventCode": "0xD4", "EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.CANARY", "EventName": "UNC_M_SB_REJECT.CANARY",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Number of Scoreboard Requests Rejected", "BriefDescription": "Number of Scoreboard Requests Rejected",
"Counter": "0,1,2,3",
"EventCode": "0xD4", "EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.DDR_EARLY_CMP", "EventName": "UNC_M_SB_REJECT.DDR_EARLY_CMP",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Number of Scoreboard Requests Rejected : FM requests rejected due to full address conflict", "BriefDescription": "Number of Scoreboard Requests Rejected : FM requests rejected due to full address conflict",
"Counter": "0,1,2,3",
"EventCode": "0xD4", "EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.FM_ADDR_CNFLT", "EventName": "UNC_M_SB_REJECT.FM_ADDR_CNFLT",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Number of Scoreboard Requests Rejected : NM requests rejected due to set conflict", "BriefDescription": "Number of Scoreboard Requests Rejected : NM requests rejected due to set conflict",
"Counter": "0,1,2,3",
"EventCode": "0xD4", "EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.NM_SET_CNFLT", "EventName": "UNC_M_SB_REJECT.NM_SET_CNFLT",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Number of Scoreboard Requests Rejected : Patrol requests rejected due to set conflict", "BriefDescription": "Number of Scoreboard Requests Rejected : Patrol requests rejected due to set conflict",
"Counter": "0,1,2,3",
"EventCode": "0xD4", "EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.PATROL_SET_CNFLT", "EventName": "UNC_M_SB_REJECT.PATROL_SET_CNFLT",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_RD", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_RD",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd7", "EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMRD", "EventName": "UNC_M_SB_STRV_ALLOC.FMRD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_TGR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_TGR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd7", "EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMTGR", "EventName": "UNC_M_SB_STRV_ALLOC.FMTGR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_WR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_WR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd7", "EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMWR", "EventName": "UNC_M_SB_STRV_ALLOC.FMWR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Read - Set", "BriefDescription": ": Far Mem Read - Set",
"Counter": "0,1,2,3",
"EventCode": "0xD7", "EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_RD", "EventName": "UNC_M_SB_STRV_ALLOC.FM_RD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Read - Clear", "BriefDescription": ": Near Mem Read - Clear",
"Counter": "0,1,2,3",
"EventCode": "0xD7", "EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_TGR", "EventName": "UNC_M_SB_STRV_ALLOC.FM_TGR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Write - Set", "BriefDescription": ": Far Mem Write - Set",
"Counter": "0,1,2,3",
"EventCode": "0xD7", "EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_WR", "EventName": "UNC_M_SB_STRV_ALLOC.FM_WR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.NM_RD", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.NM_RD",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd7", "EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMRD", "EventName": "UNC_M_SB_STRV_ALLOC.NMRD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.NM_WR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.NM_WR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd7", "EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMWR", "EventName": "UNC_M_SB_STRV_ALLOC.NMWR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Read - Set", "BriefDescription": ": Near Mem Read - Set",
"Counter": "0,1,2,3",
"EventCode": "0xD7", "EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NM_RD", "EventName": "UNC_M_SB_STRV_ALLOC.NM_RD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Write - Set", "BriefDescription": ": Near Mem Write - Set",
"Counter": "0,1,2,3",
"EventCode": "0xD7", "EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NM_WR", "EventName": "UNC_M_SB_STRV_ALLOC.NM_WR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_RD", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_RD",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xde", "EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FMRD", "EventName": "UNC_M_SB_STRV_DEALLOC.FMRD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_TGR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_TGR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xde", "EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FMTGR", "EventName": "UNC_M_SB_STRV_DEALLOC.FMTGR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_WR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_WR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xde", "EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FMWR", "EventName": "UNC_M_SB_STRV_DEALLOC.FMWR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Read - Set", "BriefDescription": ": Far Mem Read - Set",
"Counter": "0,1,2,3",
"EventCode": "0xDE", "EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_RD", "EventName": "UNC_M_SB_STRV_DEALLOC.FM_RD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Read - Clear", "BriefDescription": ": Near Mem Read - Clear",
"Counter": "0,1,2,3",
"EventCode": "0xDE", "EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_TGR", "EventName": "UNC_M_SB_STRV_DEALLOC.FM_TGR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Write - Set", "BriefDescription": ": Far Mem Write - Set",
"Counter": "0,1,2,3",
"EventCode": "0xDE", "EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_WR", "EventName": "UNC_M_SB_STRV_DEALLOC.FM_WR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.NM_RD", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.NM_RD",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xde", "EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.NMRD", "EventName": "UNC_M_SB_STRV_DEALLOC.NMRD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.NM_WR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.NM_WR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xde", "EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.NMWR", "EventName": "UNC_M_SB_STRV_DEALLOC.NMWR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Read - Set", "BriefDescription": ": Near Mem Read - Set",
"Counter": "0,1,2,3",
"EventCode": "0xDE", "EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.NM_RD", "EventName": "UNC_M_SB_STRV_DEALLOC.NM_RD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Write - Set", "BriefDescription": ": Near Mem Write - Set",
"Counter": "0,1,2,3",
"EventCode": "0xDE", "EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.NM_WR", "EventName": "UNC_M_SB_STRV_DEALLOC.NM_WR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_RD", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_RD",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd8", "EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FMRD", "EventName": "UNC_M_SB_STRV_OCC.FMRD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_TGR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_TGR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd8", "EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FMTGR", "EventName": "UNC_M_SB_STRV_OCC.FMTGR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_WR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_WR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd8", "EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FMWR", "EventName": "UNC_M_SB_STRV_OCC.FMWR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Read", "BriefDescription": ": Far Mem Read",
"Counter": "0,1,2,3",
"EventCode": "0xD8", "EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FM_RD", "EventName": "UNC_M_SB_STRV_OCC.FM_RD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Read - Clear", "BriefDescription": ": Near Mem Read - Clear",
"Counter": "0,1,2,3",
"EventCode": "0xD8", "EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FM_TGR", "EventName": "UNC_M_SB_STRV_OCC.FM_TGR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Far Mem Write", "BriefDescription": ": Far Mem Write",
"Counter": "0,1,2,3",
"EventCode": "0xD8", "EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FM_WR", "EventName": "UNC_M_SB_STRV_OCC.FM_WR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.NM_RD", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.NM_RD",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd8", "EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.NMRD", "EventName": "UNC_M_SB_STRV_OCC.NMRD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.NM_WR", "BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.NM_WR",
"Counter": "0,1,2,3",
"Deprecated": "1", "Deprecated": "1",
"EventCode": "0xd8", "EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.NMWR", "EventName": "UNC_M_SB_STRV_OCC.NMWR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Read", "BriefDescription": ": Near Mem Read",
"Counter": "0,1,2,3",
"EventCode": "0xD8", "EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.NM_RD", "EventName": "UNC_M_SB_STRV_OCC.NM_RD",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": ": Near Mem Write", "BriefDescription": ": Near Mem Write",
"Counter": "0,1,2,3",
"EventCode": "0xD8", "EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.NM_WR", "EventName": "UNC_M_SB_STRV_OCC.NM_WR",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.DDR4_CMP", "BriefDescription": "UNC_M_SB_TAGGED.DDR4_CMP",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.DDR4_CMP", "EventName": "UNC_M_SB_TAGGED.DDR4_CMP",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x8", "UMask": "0x8",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.NEW", "BriefDescription": "UNC_M_SB_TAGGED.NEW",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.NEW", "EventName": "UNC_M_SB_TAGGED.NEW",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x1", "UMask": "0x1",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.OCC", "BriefDescription": "UNC_M_SB_TAGGED.OCC",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.OCC", "EventName": "UNC_M_SB_TAGGED.OCC",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x80", "UMask": "0x80",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.PMM0_CMP", "BriefDescription": "UNC_M_SB_TAGGED.PMM0_CMP",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM0_CMP", "EventName": "UNC_M_SB_TAGGED.PMM0_CMP",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x10", "UMask": "0x10",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.PMM1_CMP", "BriefDescription": "UNC_M_SB_TAGGED.PMM1_CMP",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM1_CMP", "EventName": "UNC_M_SB_TAGGED.PMM1_CMP",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x20", "UMask": "0x20",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.PMM2_CMP", "BriefDescription": "UNC_M_SB_TAGGED.PMM2_CMP",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM2_CMP", "EventName": "UNC_M_SB_TAGGED.PMM2_CMP",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x40", "UMask": "0x40",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.RD_HIT", "BriefDescription": "UNC_M_SB_TAGGED.RD_HIT",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.RD_HIT", "EventName": "UNC_M_SB_TAGGED.RD_HIT",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x2", "UMask": "0x2",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "UNC_M_SB_TAGGED.RD_MISS", "BriefDescription": "UNC_M_SB_TAGGED.RD_MISS",
"Counter": "0,1,2,3",
"EventCode": "0xDD", "EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.RD_MISS", "EventName": "UNC_M_SB_TAGGED.RD_MISS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"UMask": "0x4", "UMask": "0x4",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "2LM Tag Check : Hit in Near Memory Cache", "BriefDescription": "2LM Tag Check : Hit in Near Memory Cache",
"Counter": "0,1,2,3",
"EventCode": "0xD3", "EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.HIT", "EventName": "UNC_M_TAGCHK.HIT",
"PerPkg": "1", "PerPkg": "1",
@@ -1411,6 +1725,7 @@
}, },
{ {
"BriefDescription": "2LM Tag Check : Miss, no data in this line", "BriefDescription": "2LM Tag Check : Miss, no data in this line",
"Counter": "0,1,2,3",
"EventCode": "0xD3", "EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.MISS_CLEAN", "EventName": "UNC_M_TAGCHK.MISS_CLEAN",
"PerPkg": "1", "PerPkg": "1",
@@ -1419,6 +1734,7 @@
}, },
{ {
"BriefDescription": "2LM Tag Check : Miss, existing data may be evicted to Far Memory", "BriefDescription": "2LM Tag Check : Miss, existing data may be evicted to Far Memory",
"Counter": "0,1,2,3",
"EventCode": "0xD3", "EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.MISS_DIRTY", "EventName": "UNC_M_TAGCHK.MISS_DIRTY",
"PerPkg": "1", "PerPkg": "1",
@@ -1427,6 +1743,7 @@
}, },
{ {
"BriefDescription": "2LM Tag Check : Read Hit in Near Memory Cache", "BriefDescription": "2LM Tag Check : Read Hit in Near Memory Cache",
"Counter": "0,1,2,3",
"EventCode": "0xD3", "EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.NM_RD_HIT", "EventName": "UNC_M_TAGCHK.NM_RD_HIT",
"PerPkg": "1", "PerPkg": "1",
@@ -1435,6 +1752,7 @@
}, },
{ {
"BriefDescription": "2LM Tag Check : Write Hit in Near Memory Cache", "BriefDescription": "2LM Tag Check : Write Hit in Near Memory Cache",
"Counter": "0,1,2,3",
"EventCode": "0xD3", "EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.NM_WR_HIT", "EventName": "UNC_M_TAGCHK.NM_WR_HIT",
"PerPkg": "1", "PerPkg": "1",
@@ -1443,24 +1761,30 @@
}, },
{ {
"BriefDescription": "Write Pending Queue Full Cycles", "BriefDescription": "Write Pending Queue Full Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x22", "EventCode": "0x22",
"EventName": "UNC_M_WPQ_CYCLES_FULL_PCH0", "EventName": "UNC_M_WPQ_CYCLES_FULL_PCH0",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue Full Cycles : Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overhead.", "PublicDescription": "Write Pending Queue Full Cycles : Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overhead.",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Write Pending Queue Full Cycles", "BriefDescription": "Write Pending Queue Full Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x16", "EventCode": "0x16",
"EventName": "UNC_M_WPQ_CYCLES_FULL_PCH1", "EventName": "UNC_M_WPQ_CYCLES_FULL_PCH1",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue Full Cycles : Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overhead.", "PublicDescription": "Write Pending Queue Full Cycles : Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overhead.",
"Unit": "iMC" "Unit": "iMC"
}, },
{ {
"BriefDescription": "Write Pending Queue Not Empty", "BriefDescription": "Write Pending Queue Not Empty",
"Counter": "0,1,2,3",
"EventCode": "0x21", "EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE.PCH0", "EventName": "UNC_M_WPQ_CYCLES_NE.PCH0",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue Not Empty : Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies.", "PublicDescription": "Write Pending Queue Not Empty : Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies.",
"UMask": "0x1", "UMask": "0x1",
@@ -1468,8 +1792,10 @@
}, },
{ {
"BriefDescription": "Write Pending Queue Not Empty", "BriefDescription": "Write Pending Queue Not Empty",
"Counter": "0,1,2,3",
"EventCode": "0x21", "EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE.PCH1", "EventName": "UNC_M_WPQ_CYCLES_NE.PCH1",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue Not Empty : Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies.", "PublicDescription": "Write Pending Queue Not Empty : Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies.",
"UMask": "0x2", "UMask": "0x2",
@@ -1477,6 +1803,7 @@
}, },
{ {
"BriefDescription": "Write Pending Queue Allocations", "BriefDescription": "Write Pending Queue Allocations",
"Counter": "0,1,2,3",
"EventCode": "0x20", "EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH0", "EventName": "UNC_M_WPQ_INSERTS.PCH0",
"PerPkg": "1", "PerPkg": "1",
@@ -1486,6 +1813,7 @@
}, },
{ {
"BriefDescription": "Write Pending Queue Allocations", "BriefDescription": "Write Pending Queue Allocations",
"Counter": "0,1,2,3",
"EventCode": "0x20", "EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH1", "EventName": "UNC_M_WPQ_INSERTS.PCH1",
"PerPkg": "1", "PerPkg": "1",
@@ -1495,6 +1823,7 @@
}, },
{ {
"BriefDescription": "Write Pending Queue Occupancy", "BriefDescription": "Write Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0x82", "EventCode": "0x82",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH0", "EventName": "UNC_M_WPQ_OCCUPANCY_PCH0",
"PerPkg": "1", "PerPkg": "1",
@@ -1503,6 +1832,7 @@
}, },
{ {
"BriefDescription": "Write Pending Queue Occupancy", "BriefDescription": "Write Pending Queue Occupancy",
"Counter": "0,1,2,3",
"EventCode": "0x83", "EventCode": "0x83",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH1", "EventName": "UNC_M_WPQ_OCCUPANCY_PCH1",
"PerPkg": "1", "PerPkg": "1",
@@ -1511,8 +1841,10 @@
}, },
{ {
"BriefDescription": "Write Pending Queue CAM Match", "BriefDescription": "Write Pending Queue CAM Match",
"Counter": "0,1,2,3",
"EventCode": "0x23", "EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT.PCH0", "EventName": "UNC_M_WPQ_READ_HIT.PCH0",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x1", "UMask": "0x1",
...@@ -1520,8 +1852,10 @@ ...@@ -1520,8 +1852,10 @@
}, },
{ {
"BriefDescription": "Write Pending Queue CAM Match", "BriefDescription": "Write Pending Queue CAM Match",
"Counter": "0,1,2,3",
"EventCode": "0x23", "EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT.PCH1", "EventName": "UNC_M_WPQ_READ_HIT.PCH1",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x2", "UMask": "0x2",
...@@ -1529,8 +1863,10 @@ ...@@ -1529,8 +1863,10 @@
}, },
{ {
"BriefDescription": "Write Pending Queue CAM Match", "BriefDescription": "Write Pending Queue CAM Match",
"Counter": "0,1,2,3",
"EventCode": "0x24", "EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT.PCH0", "EventName": "UNC_M_WPQ_WRITE_HIT.PCH0",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x1", "UMask": "0x1",
...@@ -1538,8 +1874,10 @@ ...@@ -1538,8 +1874,10 @@
}, },
{ {
"BriefDescription": "Write Pending Queue CAM Match", "BriefDescription": "Write Pending Queue CAM Match",
"Counter": "0,1,2,3",
"EventCode": "0x24", "EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT.PCH1", "EventName": "UNC_M_WPQ_WRITE_HIT.PCH1",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.", "PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x2", "UMask": "0x2",
......
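The UNC_M_WPQ_OCCUPANCY_PCH0/PCH1 events above accumulate the number of WPQ entries present each DCLK cycle, and the not-empty event earlier in this file counts the cycles in which the queue holds at least one entry, so the two can be combined into an average queue depth. The short Python sketch below shows only that arithmetic; the variable names and sample counts are illustrative assumptions, not values defined by this change.

def avg_wpq_occupancy(occupancy_sum, not_empty_cycles, total_dclk_cycles):
    """Derive average Write Pending Queue depth from iMC counters.

    occupancy_sum     - WPQ occupancy accumulation (e.g. UNC_M_WPQ_OCCUPANCY_PCH0)
    not_empty_cycles  - cycles the WPQ held at least one entry (the not-empty event above)
    total_dclk_cycles - all DCLK cycles in the same measurement interval
    """
    avg_while_busy = occupancy_sum / not_empty_cycles if not_empty_cycles else 0.0
    avg_overall = occupancy_sum / total_dclk_cycles if total_dclk_cycles else 0.0
    return avg_while_busy, avg_overall

# Illustrative counts only: ~8 entries on average while the queue was non-empty.
print(avg_wpq_occupancy(8_000_000, 1_000_000, 4_000_000))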
[ [
{ {
"BriefDescription": "Clockticks of the power control unit (PCU)", "BriefDescription": "Clockticks of the power control unit (PCU)",
"Counter": "0,1,2,3",
"EventName": "UNC_P_CLOCKTICKS", "EventName": "UNC_P_CLOCKTICKS",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Clockticks of the power control unit (PCU) : The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.", "PublicDescription": "Clockticks of the power control unit (PCU) : The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
...@@ -8,147 +9,185 @@ ...@@ -8,147 +9,185 @@
}, },
{ {
"BriefDescription": "UNC_P_CORE_TRANSITION_CYCLES", "BriefDescription": "UNC_P_CORE_TRANSITION_CYCLES",
"Counter": "0,1,2,3",
"EventCode": "0x60", "EventCode": "0x60",
"EventName": "UNC_P_CORE_TRANSITION_CYCLES", "EventName": "UNC_P_CORE_TRANSITION_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "UNC_P_DEMOTIONS", "BriefDescription": "UNC_P_DEMOTIONS",
"Counter": "0,1,2,3",
"EventCode": "0x30", "EventCode": "0x30",
"EventName": "UNC_P_DEMOTIONS", "EventName": "UNC_P_DEMOTIONS",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Phase Shed 0 Cycles", "BriefDescription": "Phase Shed 0 Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x75", "EventCode": "0x75",
"EventName": "UNC_P_FIVR_PS_PS0_CYCLES", "EventName": "UNC_P_FIVR_PS_PS0_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Phase Shed 0 Cycles : Cycles spent in phase-shedding power state 0", "PublicDescription": "Phase Shed 0 Cycles : Cycles spent in phase-shedding power state 0",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Phase Shed 1 Cycles", "BriefDescription": "Phase Shed 1 Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x76", "EventCode": "0x76",
"EventName": "UNC_P_FIVR_PS_PS1_CYCLES", "EventName": "UNC_P_FIVR_PS_PS1_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Phase Shed 1 Cycles : Cycles spent in phase-shedding power state 1", "PublicDescription": "Phase Shed 1 Cycles : Cycles spent in phase-shedding power state 1",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Phase Shed 2 Cycles", "BriefDescription": "Phase Shed 2 Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x77", "EventCode": "0x77",
"EventName": "UNC_P_FIVR_PS_PS2_CYCLES", "EventName": "UNC_P_FIVR_PS_PS2_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Phase Shed 2 Cycles : Cycles spent in phase-shedding power state 2", "PublicDescription": "Phase Shed 2 Cycles : Cycles spent in phase-shedding power state 2",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Phase Shed 3 Cycles", "BriefDescription": "Phase Shed 3 Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x78", "EventCode": "0x78",
"EventName": "UNC_P_FIVR_PS_PS3_CYCLES", "EventName": "UNC_P_FIVR_PS_PS3_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Phase Shed 3 Cycles : Cycles spent in phase-shedding power state 3", "PublicDescription": "Phase Shed 3 Cycles : Cycles spent in phase-shedding power state 3",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "AVX256 Frequency Clipping", "BriefDescription": "AVX256 Frequency Clipping",
"Counter": "0,1,2,3",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "UNC_P_FREQ_CLIP_AVX256", "EventName": "UNC_P_FREQ_CLIP_AVX256",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "AVX512 Frequency Clipping", "BriefDescription": "AVX512 Frequency Clipping",
"Counter": "0,1,2,3",
"EventCode": "0x4a", "EventCode": "0x4a",
"EventName": "UNC_P_FREQ_CLIP_AVX512", "EventName": "UNC_P_FREQ_CLIP_AVX512",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Thermal Strongest Upper Limit Cycles", "BriefDescription": "Thermal Strongest Upper Limit Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x04", "EventCode": "0x04",
"EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES", "EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Thermal Strongest Upper Limit Cycles : Number of cycles any frequency is reduced due to a thermal limit. Count only if throttling is occurring.", "PublicDescription": "Thermal Strongest Upper Limit Cycles : Number of cycles any frequency is reduced due to a thermal limit. Count only if throttling is occurring.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Power Strongest Upper Limit Cycles", "BriefDescription": "Power Strongest Upper Limit Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x05", "EventCode": "0x05",
"EventName": "UNC_P_FREQ_MAX_POWER_CYCLES", "EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Power Strongest Upper Limit Cycles : Counts the number of cycles when power is the upper limit on frequency.", "PublicDescription": "Power Strongest Upper Limit Cycles : Counts the number of cycles when power is the upper limit on frequency.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "IO P Limit Strongest Lower Limit Cycles", "BriefDescription": "IO P Limit Strongest Lower Limit Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x73", "EventCode": "0x73",
"EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES", "EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "IO P Limit Strongest Lower Limit Cycles : Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs to the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW. This is necessary for when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidth.", "PublicDescription": "IO P Limit Strongest Lower Limit Cycles : Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs to the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW. This is necessary for when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidth.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Cycles spent changing Frequency", "BriefDescription": "Cycles spent changing Frequency",
"Counter": "0,1,2,3",
"EventCode": "0x74", "EventCode": "0x74",
"EventName": "UNC_P_FREQ_TRANS_CYCLES", "EventName": "UNC_P_FREQ_TRANS_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Cycles spent changing Frequency : Counts the number of cycles when the system is changing frequency. This can not be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.", "PublicDescription": "Cycles spent changing Frequency : Counts the number of cycles when the system is changing frequency. This can not be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Memory Phase Shedding Cycles", "BriefDescription": "Memory Phase Shedding Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x2F", "EventCode": "0x2F",
"EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES", "EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Memory Phase Shedding Cycles : Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency.", "PublicDescription": "Memory Phase Shedding Cycles : Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Package C State Residency - C0", "BriefDescription": "Package C State Residency - C0",
"Counter": "0,1,2,3",
"EventCode": "0x2A", "EventCode": "0x2A",
"EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES", "EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Package C State Residency - C0 : Counts the number of cycles when the package was in C0. This event can be used in conjunction with edge detect to count C0 entrances (or exits using invert). Residency events do not include transition times.", "PublicDescription": "Package C State Residency - C0 : Counts the number of cycles when the package was in C0. This event can be used in conjunction with edge detect to count C0 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Package C State Residency - C2E", "BriefDescription": "Package C State Residency - C2E",
"Counter": "0,1,2,3",
"EventCode": "0x2B", "EventCode": "0x2B",
"EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES", "EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Package C State Residency - C2E : Counts the number of cycles when the package was in C2E. This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert). Residency events do not include transition times.", "PublicDescription": "Package C State Residency - C2E : Counts the number of cycles when the package was in C2E. This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Package C State Residency - C3", "BriefDescription": "Package C State Residency - C3",
"Counter": "0,1,2,3",
"EventCode": "0x2C", "EventCode": "0x2C",
"EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES", "EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Package C State Residency - C3 : Counts the number of cycles when the package was in C3. This event can be used in conjunction with edge detect to count C3 entrances (or exits using invert). Residency events do not include transition times.", "PublicDescription": "Package C State Residency - C3 : Counts the number of cycles when the package was in C3. This event can be used in conjunction with edge detect to count C3 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Package C State Residency - C6", "BriefDescription": "Package C State Residency - C6",
"Counter": "0,1,2,3",
"EventCode": "0x2D", "EventCode": "0x2D",
"EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES", "EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Package C State Residency - C6 : Counts the number of cycles when the package was in C6. This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert). Residency events do not include transition times.", "PublicDescription": "Package C State Residency - C6 : Counts the number of cycles when the package was in C6. This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "UNC_P_PMAX_THROTTLED_CYCLES", "BriefDescription": "UNC_P_PMAX_THROTTLED_CYCLES",
"Counter": "0,1,2,3",
"EventCode": "0x06", "EventCode": "0x06",
"EventName": "UNC_P_PMAX_THROTTLED_CYCLES", "EventName": "UNC_P_PMAX_THROTTLED_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Number of cores in C-State : C0 and C1", "BriefDescription": "Number of cores in C-State : C0 and C1",
"Counter": "0,1,2,3",
"EventCode": "0x80", "EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Number of cores in C-State : C0 and C1 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.", "PublicDescription": "Number of cores in C-State : C0 and C1 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"UMask": "0x40", "UMask": "0x40",
...@@ -156,8 +195,10 @@ ...@@ -156,8 +195,10 @@
}, },
{ {
"BriefDescription": "Number of cores in C-State : C3", "BriefDescription": "Number of cores in C-State : C3",
"Counter": "0,1,2,3",
"EventCode": "0x80", "EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Number of cores in C-State : C3 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.", "PublicDescription": "Number of cores in C-State : C3 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"UMask": "0x80", "UMask": "0x80",
...@@ -165,8 +206,10 @@ ...@@ -165,8 +206,10 @@
}, },
{ {
"BriefDescription": "Number of cores in C-State : C6 and C7", "BriefDescription": "Number of cores in C-State : C6 and C7",
"Counter": "0,1,2,3",
"EventCode": "0x80", "EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6", "EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Number of cores in C-State : C6 and C7 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.", "PublicDescription": "Number of cores in C-State : C6 and C7 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"UMask": "0xc0", "UMask": "0xc0",
...@@ -174,32 +217,40 @@ ...@@ -174,32 +217,40 @@
}, },
{ {
"BriefDescription": "External Prochot", "BriefDescription": "External Prochot",
"Counter": "0,1,2,3",
"EventCode": "0x0A", "EventCode": "0x0A",
"EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES", "EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "External Prochot : Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.", "PublicDescription": "External Prochot : Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Internal Prochot", "BriefDescription": "Internal Prochot",
"Counter": "0,1,2,3",
"EventCode": "0x09", "EventCode": "0x09",
"EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES", "EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Internal Prochot : Counts the number of cycles that we are in Internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.", "PublicDescription": "Internal Prochot : Counts the number of cycles that we are in Internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "Total Core C State Transition Cycles", "BriefDescription": "Total Core C State Transition Cycles",
"Counter": "0,1,2,3",
"EventCode": "0x72", "EventCode": "0x72",
"EventName": "UNC_P_TOTAL_TRANSITION_CYCLES", "EventName": "UNC_P_TOTAL_TRANSITION_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "Total Core C State Transition Cycles : Number of cycles spent performing core C state transitions across all cores.", "PublicDescription": "Total Core C State Transition Cycles : Number of cycles spent performing core C state transitions across all cores.",
"Unit": "PCU" "Unit": "PCU"
}, },
{ {
"BriefDescription": "VR Hot", "BriefDescription": "VR Hot",
"Counter": "0,1,2,3",
"EventCode": "0x42", "EventCode": "0x42",
"EventName": "UNC_P_VR_HOT_CYCLES", "EventName": "UNC_P_VR_HOT_CYCLES",
"Experimental": "1",
"PerPkg": "1", "PerPkg": "1",
"PublicDescription": "VR Hot : Number of cycles that a CPU SVID VR is hot. Does not cover DRAM VRs", "PublicDescription": "VR Hot : Number of cycles that a CPU SVID VR is hot. Does not cover DRAM VRs",
"Unit": "PCU" "Unit": "PCU"
......
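UNC_P_CLOCKTICKS above counts the PCU's fixed 1 GHz pclk (a good proxy for wall time), and the UNC_P_PKG_RESIDENCY_*_CYCLES events count pclk cycles spent in each package C-state, excluding transition time, so dividing one by the other gives the package residency fraction. The Python sketch below shows only that arithmetic; the sample counts are assumptions for illustration.

def pkg_residency_pct(residency_cycles, pcu_clockticks):
    """Percentage of the interval the package spent in one C-state.

    residency_cycles - e.g. UNC_P_PKG_RESIDENCY_C6_CYCLES for the interval
    pcu_clockticks   - UNC_P_CLOCKTICKS for the same interval (fixed 1 GHz pclk)
    """
    return 100.0 * residency_cycles / pcu_clockticks if pcu_clockticks else 0.0

# Illustrative counts only: the package was in C6 for about 25% of the interval.
print(pkg_residency_pct(250_000_000, 1_000_000_000))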
[ [
{ {
"BriefDescription": "Loads that miss the DTLB and hit the STLB.", "BriefDescription": "Loads that miss the DTLB and hit the STLB.",
"Counter": "0,1,2,3",
"EventCode": "0x08", "EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT", "EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).", "PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
...@@ -9,6 +10,7 @@ ...@@ -9,6 +10,7 @@
}, },
{ {
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.", "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EventCode": "0x08", "EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE", "EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
...@@ -18,6 +20,7 @@ ...@@ -18,6 +20,7 @@
}, },
{ {
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)", "BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
"Counter": "0,1,2,3",
"EventCode": "0x08", "EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -26,6 +29,7 @@ ...@@ -26,6 +29,7 @@
}, },
{ {
"BriefDescription": "Page walks completed due to a demand data load to a 1G page.", "BriefDescription": "Page walks completed due to a demand data load to a 1G page.",
"Counter": "0,1,2,3",
"EventCode": "0x08", "EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -34,6 +38,7 @@ ...@@ -34,6 +38,7 @@
}, },
{ {
"BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.", "BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.",
"Counter": "0,1,2,3",
"EventCode": "0x08", "EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -42,6 +47,7 @@ ...@@ -42,6 +47,7 @@
}, },
{ {
"BriefDescription": "Page walks completed due to a demand data load to a 4K page.", "BriefDescription": "Page walks completed due to a demand data load to a 4K page.",
"Counter": "0,1,2,3",
"EventCode": "0x08", "EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K", "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -50,6 +56,7 @@ ...@@ -50,6 +56,7 @@
}, },
{ {
"BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.", "BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.",
"Counter": "0,1,2,3",
"EventCode": "0x08", "EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING", "EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.", "PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.",
...@@ -58,6 +65,7 @@ ...@@ -58,6 +65,7 @@
}, },
{ {
"BriefDescription": "Stores that miss the DTLB and hit the STLB.", "BriefDescription": "Stores that miss the DTLB and hit the STLB.",
"Counter": "0,1,2,3",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT", "EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).", "PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
...@@ -66,6 +74,7 @@ ...@@ -66,6 +74,7 @@
}, },
{ {
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.", "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_ACTIVE", "EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
...@@ -75,6 +84,7 @@ ...@@ -75,6 +84,7 @@
}, },
{ {
"BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)", "BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
"Counter": "0,1,2,3",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -83,6 +93,7 @@ ...@@ -83,6 +93,7 @@
}, },
{ {
"BriefDescription": "Page walks completed due to a demand data store to a 1G page.", "BriefDescription": "Page walks completed due to a demand data store to a 1G page.",
"Counter": "0,1,2,3",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -91,6 +102,7 @@ ...@@ -91,6 +102,7 @@
}, },
{ {
"BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.", "BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.",
"Counter": "0,1,2,3",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -99,6 +111,7 @@ ...@@ -99,6 +111,7 @@
}, },
{ {
"BriefDescription": "Page walks completed due to a demand data store to a 4K page.", "BriefDescription": "Page walks completed due to a demand data store to a 4K page.",
"Counter": "0,1,2,3",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K", "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
...@@ -107,6 +120,7 @@ ...@@ -107,6 +120,7 @@
}, },
{ {
"BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.", "BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.",
"Counter": "0,1,2,3",
"EventCode": "0x49", "EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING", "EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.", "PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.",
...@@ -115,6 +129,7 @@ ...@@ -115,6 +129,7 @@
}, },
{ {
"BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.", "BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
"Counter": "0,1,2,3",
"EventCode": "0x85", "EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT", "EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).", "PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).",
...@@ -123,6 +138,7 @@ ...@@ -123,6 +138,7 @@
}, },
{ {
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.", "BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.",
"Counter": "0,1,2,3",
"CounterMask": "1", "CounterMask": "1",
"EventCode": "0x85", "EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_ACTIVE", "EventName": "ITLB_MISSES.WALK_ACTIVE",
...@@ -132,6 +148,7 @@ ...@@ -132,6 +148,7 @@
}, },
{ {
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)", "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
"Counter": "0,1,2,3",
"EventCode": "0x85", "EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED", "EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
...@@ -140,6 +157,7 @@ ...@@ -140,6 +157,7 @@
}, },
{ {
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)", "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
"Counter": "0,1,2,3",
"EventCode": "0x85", "EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M", "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
...@@ -148,6 +166,7 @@ ...@@ -148,6 +166,7 @@
}, },
{ {
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)", "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
"Counter": "0,1,2,3",
"EventCode": "0x85", "EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K", "EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.", "PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
...@@ -156,6 +175,7 @@ ...@@ -156,6 +175,7 @@
}, },
{ {
"BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.", "BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.",
"Counter": "0,1,2,3",
"EventCode": "0x85", "EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_PENDING", "EventName": "ITLB_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.", "PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.",
...@@ -164,6 +184,7 @@ ...@@ -164,6 +184,7 @@
}, },
{ {
"BriefDescription": "DTLB flush attempts of the thread-specific entries", "BriefDescription": "DTLB flush attempts of the thread-specific entries",
"Counter": "0,1,2,3",
"EventCode": "0xBD", "EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD", "EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "Counts the number of DTLB flush attempts of the thread-specific entries.", "PublicDescription": "Counts the number of DTLB flush attempts of the thread-specific entries.",
...@@ -172,6 +193,7 @@ ...@@ -172,6 +193,7 @@
}, },
{ {
"BriefDescription": "STLB flush attempts", "BriefDescription": "STLB flush attempts",
"Counter": "0,1,2,3",
"EventCode": "0xBD", "EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY", "EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.).", "PublicDescription": "Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.).",
......
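The WALK_PENDING events above accumulate, each cycle, the number of page walks outstanding in the PMH, while the matching WALK_COMPLETED events count walks that finished, so dividing the two approximates the average walk duration in cycles. This derived ratio is an illustration, not a metric added by this commit; the Python sketch assumes both counts cover the same interval.

def avg_page_walk_cycles(walk_pending, walk_completed):
    """Approximate average page-walk latency in core cycles.

    walk_pending   - e.g. DTLB_LOAD_MISSES.WALK_PENDING (outstanding walks, summed per cycle)
    walk_completed - e.g. DTLB_LOAD_MISSES.WALK_COMPLETED (walks that finished)
    """
    return walk_pending / walk_completed if walk_completed else 0.0

# Illustrative counts only: each demand-load walk took roughly 30 cycles.
print(avg_page_walk_cycles(3_000_000, 100_000))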
...@@ -15,7 +15,7 @@ GenuineIntel-6-A[DE],v1.02,graniterapids,core ...@@ -15,7 +15,7 @@ GenuineIntel-6-A[DE],v1.02,graniterapids,core
GenuineIntel-6-(3C|45|46),v35,haswell,core GenuineIntel-6-(3C|45|46),v35,haswell,core
GenuineIntel-6-3F,v28,haswellx,core GenuineIntel-6-3F,v28,haswellx,core
GenuineIntel-6-7[DE],v1.22,icelake,core GenuineIntel-6-7[DE],v1.22,icelake,core
GenuineIntel-6-6[AC],v1.24,icelakex,core GenuineIntel-6-6[AC],v1.26,icelakex,core
GenuineIntel-6-3A,v24,ivybridge,core GenuineIntel-6-3A,v24,ivybridge,core
GenuineIntel-6-3E,v24,ivytown,core GenuineIntel-6-3E,v24,ivytown,core
GenuineIntel-6-2D,v24,jaketown,core GenuineIntel-6-2D,v24,jaketown,core
......
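The mapfile.csv hunk above raises the icelakex core-event version to v1.26 for CPUs whose CPUID string matches GenuineIntel-6-6[AC]. As a rough sketch of how such a row selects an event directory (the cpuid string shape used here is an assumption for illustration, not necessarily how perf formats it):

import re

# Simplified view of two mapfile.csv rows: (cpuid regex, version, directory, event type).
MAPFILE = [
    ("GenuineIntel-6-6[AC]", "v1.26", "icelakex", "core"),
    ("GenuineIntel-6-7[DE]", "v1.22", "icelake", "core"),
]

def lookup_event_dir(cpuid):
    """Return the first event directory whose regex matches the cpuid string."""
    for pattern, _version, directory, _type in MAPFILE:
        if re.match(pattern, cpuid):
            return directory
    return None

# Hypothetical vendor-family-model string for an Ice Lake server part.
print(lookup_event_dir("GenuineIntel-6-6A"))  # -> "icelakex"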